00:00:00.000 Started by upstream project "autotest-per-patch" build number 126255 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.100 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.100 The recommended git tool is: git 00:00:00.100 using credential 00000000-0000-0000-0000-000000000002 00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.143 Fetching changes from the remote Git repository 00:00:00.148 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.189 Using shallow fetch with depth 1 00:00:00.189 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.189 > git --version # timeout=10 00:00:00.223 > git --version # 'git version 2.39.2' 00:00:00.223 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.242 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.242 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.547 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.557 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.569 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.569 > git config core.sparsecheckout # timeout=10 00:00:04.578 > git read-tree -mu HEAD # timeout=10 00:00:04.594 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.612 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.613 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.704 [Pipeline] Start of Pipeline 00:00:04.717 [Pipeline] library 00:00:04.718 Loading library shm_lib@master 00:00:04.718 Library shm_lib@master is cached. Copying from home. 00:00:04.731 [Pipeline] node 00:00:04.739 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.740 [Pipeline] { 00:00:04.748 [Pipeline] catchError 00:00:04.749 [Pipeline] { 00:00:04.760 [Pipeline] wrap 00:00:04.767 [Pipeline] { 00:00:04.773 [Pipeline] stage 00:00:04.774 [Pipeline] { (Prologue) 00:00:04.790 [Pipeline] echo 00:00:04.792 Node: VM-host-SM9 00:00:04.797 [Pipeline] cleanWs 00:00:04.803 [WS-CLEANUP] Deleting project workspace... 00:00:04.803 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.807 [WS-CLEANUP] done 00:00:04.973 [Pipeline] setCustomBuildProperty 00:00:05.038 [Pipeline] httpRequest 00:00:05.062 [Pipeline] echo 00:00:05.064 Sorcerer 10.211.164.101 is alive 00:00:05.070 [Pipeline] httpRequest 00:00:05.073 HttpMethod: GET 00:00:05.073 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.073 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:05.075 Response Code: HTTP/1.1 200 OK 00:00:05.075 Success: Status code 200 is in the accepted range: 200,404 00:00:05.075 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.077 [Pipeline] sh 00:00:06.347 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.361 [Pipeline] httpRequest 00:00:06.385 [Pipeline] echo 00:00:06.386 Sorcerer 10.211.164.101 is alive 00:00:06.395 [Pipeline] httpRequest 00:00:06.398 HttpMethod: GET 00:00:06.399 URL: http://10.211.164.101/packages/spdk_406b3b1b5623aaa2c1d9028f91d64100a2de2b96.tar.gz 00:00:06.399 Sending request to url: http://10.211.164.101/packages/spdk_406b3b1b5623aaa2c1d9028f91d64100a2de2b96.tar.gz 00:00:06.419 Response Code: HTTP/1.1 200 OK 00:00:06.421 Success: Status code 200 is in the accepted range: 200,404 00:00:06.423 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_406b3b1b5623aaa2c1d9028f91d64100a2de2b96.tar.gz 00:01:05.367 [Pipeline] sh 00:01:05.646 + tar --no-same-owner -xf spdk_406b3b1b5623aaa2c1d9028f91d64100a2de2b96.tar.gz 00:01:08.931 [Pipeline] sh 00:01:09.211 + git -C spdk log --oneline -n5 00:01:09.211 406b3b1b5 util: allow NULL saddr/caddr for spdk_net_getaddr 00:01:09.211 1053f1b13 util: don't allow users to pass caddr/cport for listen sockets 00:01:09.211 0663932f5 util: add spdk_net_getaddr 00:01:09.211 9da437b46 util: move module/sock/sock_kernel.h contents to net.c 00:01:09.211 35c6d81e6 util: add spdk_net_get_interface_name 00:01:09.231 [Pipeline] writeFile 00:01:09.252 [Pipeline] sh 00:01:09.534 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:09.546 [Pipeline] sh 00:01:09.823 + cat autorun-spdk.conf 00:01:09.823 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.823 SPDK_TEST_NVMF=1 00:01:09.823 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.823 SPDK_TEST_USDT=1 00:01:09.823 SPDK_TEST_NVMF_MDNS=1 00:01:09.823 SPDK_RUN_UBSAN=1 00:01:09.823 NET_TYPE=virt 00:01:09.823 SPDK_JSONRPC_GO_CLIENT=1 00:01:09.823 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:09.828 RUN_NIGHTLY=0 00:01:09.831 [Pipeline] } 00:01:09.847 [Pipeline] // stage 00:01:09.864 [Pipeline] stage 00:01:09.866 [Pipeline] { (Run VM) 00:01:09.881 [Pipeline] sh 00:01:10.203 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:10.203 + echo 'Start stage prepare_nvme.sh' 00:01:10.203 Start stage prepare_nvme.sh 00:01:10.203 + [[ -n 2 ]] 00:01:10.203 + disk_prefix=ex2 00:01:10.203 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:10.203 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:10.203 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:10.203 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.203 ++ SPDK_TEST_NVMF=1 00:01:10.203 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.203 ++ SPDK_TEST_USDT=1 00:01:10.203 ++ SPDK_TEST_NVMF_MDNS=1 00:01:10.203 ++ SPDK_RUN_UBSAN=1 00:01:10.203 ++ NET_TYPE=virt 00:01:10.203 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:10.203 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 
00:01:10.203 ++ RUN_NIGHTLY=0 00:01:10.203 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:10.203 + nvme_files=() 00:01:10.203 + declare -A nvme_files 00:01:10.203 + backend_dir=/var/lib/libvirt/images/backends 00:01:10.203 + nvme_files['nvme.img']=5G 00:01:10.203 + nvme_files['nvme-cmb.img']=5G 00:01:10.203 + nvme_files['nvme-multi0.img']=4G 00:01:10.203 + nvme_files['nvme-multi1.img']=4G 00:01:10.203 + nvme_files['nvme-multi2.img']=4G 00:01:10.203 + nvme_files['nvme-openstack.img']=8G 00:01:10.203 + nvme_files['nvme-zns.img']=5G 00:01:10.203 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:10.203 + (( SPDK_TEST_FTL == 1 )) 00:01:10.203 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:10.203 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:10.203 + for nvme in "${!nvme_files[@]}" 00:01:10.203 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:10.203 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.203 + for nvme in "${!nvme_files[@]}" 00:01:10.203 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:10.203 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.203 + for nvme in "${!nvme_files[@]}" 00:01:10.203 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:10.203 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:10.203 + for nvme in "${!nvme_files[@]}" 00:01:10.203 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:10.461 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.461 + for nvme in "${!nvme_files[@]}" 00:01:10.461 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:10.461 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.461 + for nvme in "${!nvme_files[@]}" 00:01:10.461 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:10.719 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.719 + for nvme in "${!nvme_files[@]}" 00:01:10.719 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:10.719 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.719 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:10.719 + echo 'End stage prepare_nvme.sh' 00:01:10.719 End stage prepare_nvme.sh 00:01:10.730 [Pipeline] sh 00:01:11.004 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:11.004 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora38 00:01:11.004 00:01:11.004 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:11.004 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:11.004 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:11.004 HELP=0 00:01:11.004 DRY_RUN=0 00:01:11.004 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:01:11.004 NVME_DISKS_TYPE=nvme,nvme, 00:01:11.004 NVME_AUTO_CREATE=0 00:01:11.004 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:01:11.004 NVME_CMB=,, 00:01:11.004 NVME_PMR=,, 00:01:11.004 NVME_ZNS=,, 00:01:11.004 NVME_MS=,, 00:01:11.004 NVME_FDP=,, 00:01:11.004 SPDK_VAGRANT_DISTRO=fedora38 00:01:11.004 SPDK_VAGRANT_VMCPU=10 00:01:11.004 SPDK_VAGRANT_VMRAM=12288 00:01:11.004 SPDK_VAGRANT_PROVIDER=libvirt 00:01:11.004 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:11.004 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:11.004 SPDK_OPENSTACK_NETWORK=0 00:01:11.004 VAGRANT_PACKAGE_BOX=0 00:01:11.004 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:11.004 FORCE_DISTRO=true 00:01:11.004 VAGRANT_BOX_VERSION= 00:01:11.004 EXTRA_VAGRANTFILES= 00:01:11.004 NIC_MODEL=e1000 00:01:11.004 00:01:11.004 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:01:11.004 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:14.475 Bringing machine 'default' up with 'libvirt' provider... 00:01:15.064 ==> default: Creating image (snapshot of base box volume). 00:01:15.322 ==> default: Creating domain with the following settings... 00:01:15.322 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721080561_ed2addcf97199efebe68 00:01:15.322 ==> default: -- Domain type: kvm 00:01:15.322 ==> default: -- Cpus: 10 00:01:15.322 ==> default: -- Feature: acpi 00:01:15.322 ==> default: -- Feature: apic 00:01:15.322 ==> default: -- Feature: pae 00:01:15.322 ==> default: -- Memory: 12288M 00:01:15.322 ==> default: -- Memory Backing: hugepages: 00:01:15.322 ==> default: -- Management MAC: 00:01:15.322 ==> default: -- Loader: 00:01:15.322 ==> default: -- Nvram: 00:01:15.322 ==> default: -- Base box: spdk/fedora38 00:01:15.322 ==> default: -- Storage pool: default 00:01:15.322 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721080561_ed2addcf97199efebe68.img (20G) 00:01:15.322 ==> default: -- Volume Cache: default 00:01:15.322 ==> default: -- Kernel: 00:01:15.322 ==> default: -- Initrd: 00:01:15.322 ==> default: -- Graphics Type: vnc 00:01:15.322 ==> default: -- Graphics Port: -1 00:01:15.322 ==> default: -- Graphics IP: 127.0.0.1 00:01:15.322 ==> default: -- Graphics Password: Not defined 00:01:15.322 ==> default: -- Video Type: cirrus 00:01:15.322 ==> default: -- Video VRAM: 9216 00:01:15.322 ==> default: -- Sound Type: 00:01:15.322 ==> default: -- Keymap: en-us 00:01:15.322 ==> default: -- TPM Path: 00:01:15.322 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:15.322 ==> default: -- Command line args: 00:01:15.322 ==> default: -> value=-device, 00:01:15.322 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:15.322 ==> default: -> value=-drive, 00:01:15.322 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:15.322 ==> 
default: -> value=-device, 00:01:15.322 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.322 ==> default: -> value=-device, 00:01:15.322 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:15.322 ==> default: -> value=-drive, 00:01:15.322 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:15.322 ==> default: -> value=-device, 00:01:15.322 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.322 ==> default: -> value=-drive, 00:01:15.322 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:15.322 ==> default: -> value=-device, 00:01:15.322 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.322 ==> default: -> value=-drive, 00:01:15.322 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:15.322 ==> default: -> value=-device, 00:01:15.322 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.322 ==> default: Creating shared folders metadata... 00:01:15.322 ==> default: Starting domain. 00:01:17.228 ==> default: Waiting for domain to get an IP address... 00:01:35.317 ==> default: Waiting for SSH to become available... 00:01:36.252 ==> default: Configuring and enabling network interfaces... 00:01:40.437 default: SSH address: 192.168.121.161:22 00:01:40.437 default: SSH username: vagrant 00:01:40.437 default: SSH auth method: private key 00:01:42.368 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:50.469 ==> default: Mounting SSHFS shared folder... 00:01:51.893 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:51.893 ==> default: Checking Mount.. 00:01:52.830 ==> default: Folder Successfully Mounted! 00:01:52.830 ==> default: Running provisioner: file... 00:01:53.397 default: ~/.gitconfig => .gitconfig 00:01:53.656 00:01:53.656 SUCCESS! 00:01:53.656 00:01:53.657 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:53.657 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:53.657 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
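For reference, the -drive/-device arguments above wire the raw ex2-* backing files into two emulated NVMe controllers: nvme-0 (serial 12340) with one namespace, and nvme-1 (serial 12341) with three. A minimal sketch of a roughly equivalent direct QEMU invocation for the second controller, assuming the backend images created in prepare_nvme.sh (in the job these arguments are injected through vagrant/libvirt, not run by hand; machine/memory options below are placeholders, not taken from the job):

  qemu-system-x86_64 \
    -enable-kvm -m 2048 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,logical_block_size=4096,physical_block_size=4096 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,logical_block_size=4096,physical_block_size=4096

Inside the guest this controller appears as nvme1 with namespaces nvme1n1-nvme1n3, which matches what setup.sh status reports later in the log.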
00:01:53.657 00:01:53.666 [Pipeline] } 00:01:53.687 [Pipeline] // stage 00:01:53.698 [Pipeline] dir 00:01:53.699 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:01:53.701 [Pipeline] { 00:01:53.718 [Pipeline] catchError 00:01:53.720 [Pipeline] { 00:01:53.732 [Pipeline] sh 00:01:54.009 + + vagrant ssh-config --host vagrantsed 00:01:54.009 -ne /^Host/,$p 00:01:54.009 + tee ssh_conf 00:01:58.195 Host vagrant 00:01:58.195 HostName 192.168.121.161 00:01:58.195 User vagrant 00:01:58.195 Port 22 00:01:58.195 UserKnownHostsFile /dev/null 00:01:58.195 StrictHostKeyChecking no 00:01:58.195 PasswordAuthentication no 00:01:58.195 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:58.195 IdentitiesOnly yes 00:01:58.195 LogLevel FATAL 00:01:58.195 ForwardAgent yes 00:01:58.195 ForwardX11 yes 00:01:58.195 00:01:58.209 [Pipeline] withEnv 00:01:58.211 [Pipeline] { 00:01:58.228 [Pipeline] sh 00:01:58.505 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:58.505 source /etc/os-release 00:01:58.505 [[ -e /image.version ]] && img=$(< /image.version) 00:01:58.505 # Minimal, systemd-like check. 00:01:58.505 if [[ -e /.dockerenv ]]; then 00:01:58.505 # Clear garbage from the node's name: 00:01:58.505 # agt-er_autotest_547-896 -> autotest_547-896 00:01:58.505 # $HOSTNAME is the actual container id 00:01:58.505 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:58.505 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:58.505 # We can assume this is a mount from a host where container is running, 00:01:58.505 # so fetch its hostname to easily identify the target swarm worker. 00:01:58.505 container="$(< /etc/hostname) ($agent)" 00:01:58.505 else 00:01:58.505 # Fallback 00:01:58.505 container=$agent 00:01:58.505 fi 00:01:58.505 fi 00:01:58.505 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:58.505 00:01:58.515 [Pipeline] } 00:01:58.535 [Pipeline] // withEnv 00:01:58.543 [Pipeline] setCustomBuildProperty 00:01:58.559 [Pipeline] stage 00:01:58.561 [Pipeline] { (Tests) 00:01:58.580 [Pipeline] sh 00:01:58.890 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:58.904 [Pipeline] sh 00:01:59.181 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:59.453 [Pipeline] timeout 00:01:59.454 Timeout set to expire in 40 min 00:01:59.455 [Pipeline] { 00:01:59.472 [Pipeline] sh 00:01:59.756 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:00.322 HEAD is now at 406b3b1b5 util: allow NULL saddr/caddr for spdk_net_getaddr 00:02:00.336 [Pipeline] sh 00:02:00.608 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:00.880 [Pipeline] sh 00:02:01.158 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:01.430 [Pipeline] sh 00:02:01.707 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:01.965 ++ readlink -f spdk_repo 00:02:01.965 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:01.965 + [[ -n /home/vagrant/spdk_repo ]] 00:02:01.965 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:01.965 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:01.965 + [[ -d 
/home/vagrant/spdk_repo/spdk ]] 00:02:01.965 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:01.965 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:01.965 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:01.965 + cd /home/vagrant/spdk_repo 00:02:01.965 + source /etc/os-release 00:02:01.965 ++ NAME='Fedora Linux' 00:02:01.965 ++ VERSION='38 (Cloud Edition)' 00:02:01.965 ++ ID=fedora 00:02:01.965 ++ VERSION_ID=38 00:02:01.965 ++ VERSION_CODENAME= 00:02:01.965 ++ PLATFORM_ID=platform:f38 00:02:01.965 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:01.965 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:01.965 ++ LOGO=fedora-logo-icon 00:02:01.965 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:01.965 ++ HOME_URL=https://fedoraproject.org/ 00:02:01.965 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:01.965 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:01.965 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:01.965 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:01.965 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:01.965 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:01.965 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:01.965 ++ SUPPORT_END=2024-05-14 00:02:01.965 ++ VARIANT='Cloud Edition' 00:02:01.965 ++ VARIANT_ID=cloud 00:02:01.965 + uname -a 00:02:01.965 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:01.965 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:02.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:02.531 Hugepages 00:02:02.531 node hugesize free / total 00:02:02.531 node0 1048576kB 0 / 0 00:02:02.531 node0 2048kB 0 / 0 00:02:02.531 00:02:02.531 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:02.531 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:02.531 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:02.531 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:02.531 + rm -f /tmp/spdk-ld-path 00:02:02.531 + source autorun-spdk.conf 00:02:02.531 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.531 ++ SPDK_TEST_NVMF=1 00:02:02.531 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.531 ++ SPDK_TEST_USDT=1 00:02:02.531 ++ SPDK_TEST_NVMF_MDNS=1 00:02:02.531 ++ SPDK_RUN_UBSAN=1 00:02:02.531 ++ NET_TYPE=virt 00:02:02.531 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:02.531 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:02.531 ++ RUN_NIGHTLY=0 00:02:02.531 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:02.531 + [[ -n '' ]] 00:02:02.531 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:02.531 + for M in /var/spdk/build-*-manifest.txt 00:02:02.531 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:02.531 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:02.531 + for M in /var/spdk/build-*-manifest.txt 00:02:02.531 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:02.531 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:02.531 ++ uname 00:02:02.531 + [[ Linux == \L\i\n\u\x ]] 00:02:02.531 + sudo dmesg -T 00:02:02.531 + sudo dmesg --clear 00:02:02.531 + dmesg_pid=5176 00:02:02.531 + [[ Fedora Linux == FreeBSD ]] 00:02:02.531 + sudo dmesg -Tw 00:02:02.531 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.531 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.531 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 
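The Hugepages block printed by setup.sh status above comes from per-NUMA-node sysfs counters; a small sketch of reading the same numbers directly (standard kernel sysfs paths, not part of the job's scripts):

  # Print "<node> <page size> <free> / <total>" lines matching the table above.
  for d in /sys/devices/system/node/node*/hugepages/hugepages-*; do
      node=${d#/sys/devices/system/node/}; node=${node%%/*}
      size=${d##*hugepages-}
      echo "$node $size $(<"$d/free_hugepages") / $(<"$d/nr_hugepages")"
  done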
00:02:02.531 + [[ -x /usr/src/fio-static/fio ]] 00:02:02.531 + export FIO_BIN=/usr/src/fio-static/fio 00:02:02.531 + FIO_BIN=/usr/src/fio-static/fio 00:02:02.531 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:02.531 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:02.531 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:02.531 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.531 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.531 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:02.531 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.531 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.531 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:02.531 Test configuration: 00:02:02.531 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.531 SPDK_TEST_NVMF=1 00:02:02.531 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.531 SPDK_TEST_USDT=1 00:02:02.531 SPDK_TEST_NVMF_MDNS=1 00:02:02.531 SPDK_RUN_UBSAN=1 00:02:02.531 NET_TYPE=virt 00:02:02.531 SPDK_JSONRPC_GO_CLIENT=1 00:02:02.531 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:02.531 RUN_NIGHTLY=0 21:56:49 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:02.531 21:56:49 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:02.531 21:56:49 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:02.531 21:56:49 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:02.531 21:56:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.531 21:56:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.531 21:56:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.531 21:56:49 -- paths/export.sh@5 -- $ export PATH 00:02:02.531 21:56:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.531 21:56:49 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:02.531 21:56:49 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:02.531 21:56:49 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721080609.XXXXXX 00:02:02.531 
21:56:49 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721080609.UPOxJQ 00:02:02.531 21:56:49 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:02.531 21:56:49 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:02.531 21:56:49 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:02.531 21:56:49 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:02.531 21:56:49 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:02.531 21:56:49 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:02.531 21:56:49 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:02.531 21:56:49 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.531 21:56:49 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:02:02.531 21:56:49 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:02.531 21:56:49 -- pm/common@17 -- $ local monitor 00:02:02.531 21:56:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.531 21:56:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.531 21:56:49 -- pm/common@25 -- $ sleep 1 00:02:02.531 21:56:49 -- pm/common@21 -- $ date +%s 00:02:02.531 21:56:49 -- pm/common@21 -- $ date +%s 00:02:02.531 21:56:49 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721080609 00:02:02.531 21:56:49 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721080609 00:02:02.791 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721080609_collect-vmstat.pm.log 00:02:02.791 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721080609_collect-cpu-load.pm.log 00:02:03.723 21:56:50 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:03.723 21:56:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:03.723 21:56:50 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:03.724 21:56:50 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:03.724 21:56:50 -- spdk/autobuild.sh@16 -- $ date -u 00:02:03.724 Mon Jul 15 09:56:50 PM UTC 2024 00:02:03.724 21:56:50 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:03.724 v24.09-pre-219-g406b3b1b5 00:02:03.724 21:56:50 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:03.724 21:56:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:03.724 21:56:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:03.724 21:56:50 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:03.724 21:56:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:03.724 21:56:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.724 ************************************ 00:02:03.724 START TEST ubsan 00:02:03.724 ************************************ 00:02:03.724 using ubsan 00:02:03.724 21:56:50 ubsan -- common/autotest_common.sh@1123 -- 
$ echo 'using ubsan' 00:02:03.724 00:02:03.724 real 0m0.000s 00:02:03.724 user 0m0.000s 00:02:03.724 sys 0m0.000s 00:02:03.724 21:56:50 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:03.724 21:56:50 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:03.724 ************************************ 00:02:03.724 END TEST ubsan 00:02:03.724 ************************************ 00:02:03.724 21:56:50 -- common/autotest_common.sh@1142 -- $ return 0 00:02:03.724 21:56:50 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:03.724 21:56:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:03.724 21:56:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:03.724 21:56:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:03.724 21:56:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:03.724 21:56:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:03.724 21:56:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:03.724 21:56:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:03.724 21:56:50 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:02:03.724 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:03.724 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:04.312 Using 'verbs' RDMA provider 00:02:17.429 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:29.624 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:29.624 go version go1.21.1 linux/amd64 00:02:29.624 Creating mk/config.mk...done. 00:02:29.624 Creating mk/cc.flags.mk...done. 00:02:29.624 Type 'make' to build. 00:02:29.624 21:57:15 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:29.624 21:57:15 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:29.624 21:57:15 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:29.624 21:57:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.624 ************************************ 00:02:29.624 START TEST make 00:02:29.624 ************************************ 00:02:29.624 21:57:15 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:29.624 make[1]: Nothing to be done for 'all'. 
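The configure flags assembled above come from get_config_params, so the same build can be reproduced by hand outside autobuild.sh; a sketch, assuming the vagrant paths used in this job:

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang \
      --with-shared
  make -j10   # the job invokes this step via "run_test make make -j10"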
00:02:47.702 The Meson build system 00:02:47.702 Version: 1.3.1 00:02:47.702 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:47.702 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:47.702 Build type: native build 00:02:47.702 Program cat found: YES (/usr/bin/cat) 00:02:47.702 Project name: DPDK 00:02:47.702 Project version: 24.03.0 00:02:47.702 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:47.702 C linker for the host machine: cc ld.bfd 2.39-16 00:02:47.702 Host machine cpu family: x86_64 00:02:47.702 Host machine cpu: x86_64 00:02:47.702 Message: ## Building in Developer Mode ## 00:02:47.702 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:47.702 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:47.702 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:47.702 Program python3 found: YES (/usr/bin/python3) 00:02:47.702 Program cat found: YES (/usr/bin/cat) 00:02:47.702 Compiler for C supports arguments -march=native: YES 00:02:47.702 Checking for size of "void *" : 8 00:02:47.702 Checking for size of "void *" : 8 (cached) 00:02:47.702 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:47.702 Library m found: YES 00:02:47.702 Library numa found: YES 00:02:47.702 Has header "numaif.h" : YES 00:02:47.702 Library fdt found: NO 00:02:47.702 Library execinfo found: NO 00:02:47.702 Has header "execinfo.h" : YES 00:02:47.702 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:47.702 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:47.702 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:47.702 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:47.702 Run-time dependency openssl found: YES 3.0.9 00:02:47.702 Run-time dependency libpcap found: YES 1.10.4 00:02:47.702 Has header "pcap.h" with dependency libpcap: YES 00:02:47.702 Compiler for C supports arguments -Wcast-qual: YES 00:02:47.702 Compiler for C supports arguments -Wdeprecated: YES 00:02:47.702 Compiler for C supports arguments -Wformat: YES 00:02:47.702 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:47.702 Compiler for C supports arguments -Wformat-security: NO 00:02:47.702 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:47.702 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:47.702 Compiler for C supports arguments -Wnested-externs: YES 00:02:47.702 Compiler for C supports arguments -Wold-style-definition: YES 00:02:47.702 Compiler for C supports arguments -Wpointer-arith: YES 00:02:47.702 Compiler for C supports arguments -Wsign-compare: YES 00:02:47.702 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:47.702 Compiler for C supports arguments -Wundef: YES 00:02:47.702 Compiler for C supports arguments -Wwrite-strings: YES 00:02:47.702 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:47.702 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:47.702 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:47.702 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:47.702 Program objdump found: YES (/usr/bin/objdump) 00:02:47.702 Compiler for C supports arguments -mavx512f: YES 00:02:47.702 Checking if "AVX512 checking" compiles: YES 00:02:47.702 Fetching value of define "__SSE4_2__" : 1 00:02:47.702 Fetching value of define 
"__AES__" : 1 00:02:47.702 Fetching value of define "__AVX__" : 1 00:02:47.702 Fetching value of define "__AVX2__" : 1 00:02:47.702 Fetching value of define "__AVX512BW__" : (undefined) 00:02:47.702 Fetching value of define "__AVX512CD__" : (undefined) 00:02:47.702 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:47.702 Fetching value of define "__AVX512F__" : (undefined) 00:02:47.702 Fetching value of define "__AVX512VL__" : (undefined) 00:02:47.702 Fetching value of define "__PCLMUL__" : 1 00:02:47.702 Fetching value of define "__RDRND__" : 1 00:02:47.702 Fetching value of define "__RDSEED__" : 1 00:02:47.702 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:47.702 Fetching value of define "__znver1__" : (undefined) 00:02:47.702 Fetching value of define "__znver2__" : (undefined) 00:02:47.702 Fetching value of define "__znver3__" : (undefined) 00:02:47.702 Fetching value of define "__znver4__" : (undefined) 00:02:47.702 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:47.702 Message: lib/log: Defining dependency "log" 00:02:47.702 Message: lib/kvargs: Defining dependency "kvargs" 00:02:47.702 Message: lib/telemetry: Defining dependency "telemetry" 00:02:47.702 Checking for function "getentropy" : NO 00:02:47.702 Message: lib/eal: Defining dependency "eal" 00:02:47.702 Message: lib/ring: Defining dependency "ring" 00:02:47.702 Message: lib/rcu: Defining dependency "rcu" 00:02:47.702 Message: lib/mempool: Defining dependency "mempool" 00:02:47.702 Message: lib/mbuf: Defining dependency "mbuf" 00:02:47.702 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:47.702 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:47.702 Compiler for C supports arguments -mpclmul: YES 00:02:47.702 Compiler for C supports arguments -maes: YES 00:02:47.702 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:47.702 Compiler for C supports arguments -mavx512bw: YES 00:02:47.702 Compiler for C supports arguments -mavx512dq: YES 00:02:47.702 Compiler for C supports arguments -mavx512vl: YES 00:02:47.702 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:47.702 Compiler for C supports arguments -mavx2: YES 00:02:47.702 Compiler for C supports arguments -mavx: YES 00:02:47.702 Message: lib/net: Defining dependency "net" 00:02:47.702 Message: lib/meter: Defining dependency "meter" 00:02:47.702 Message: lib/ethdev: Defining dependency "ethdev" 00:02:47.702 Message: lib/pci: Defining dependency "pci" 00:02:47.702 Message: lib/cmdline: Defining dependency "cmdline" 00:02:47.702 Message: lib/hash: Defining dependency "hash" 00:02:47.702 Message: lib/timer: Defining dependency "timer" 00:02:47.702 Message: lib/compressdev: Defining dependency "compressdev" 00:02:47.702 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:47.702 Message: lib/dmadev: Defining dependency "dmadev" 00:02:47.702 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:47.702 Message: lib/power: Defining dependency "power" 00:02:47.702 Message: lib/reorder: Defining dependency "reorder" 00:02:47.702 Message: lib/security: Defining dependency "security" 00:02:47.702 Has header "linux/userfaultfd.h" : YES 00:02:47.702 Has header "linux/vduse.h" : YES 00:02:47.702 Message: lib/vhost: Defining dependency "vhost" 00:02:47.702 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:47.702 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:47.702 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:47.702 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:47.702 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:47.702 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:47.702 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:47.702 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:47.702 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:47.702 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:47.702 Program doxygen found: YES (/usr/bin/doxygen) 00:02:47.702 Configuring doxy-api-html.conf using configuration 00:02:47.702 Configuring doxy-api-man.conf using configuration 00:02:47.702 Program mandb found: YES (/usr/bin/mandb) 00:02:47.702 Program sphinx-build found: NO 00:02:47.702 Configuring rte_build_config.h using configuration 00:02:47.702 Message: 00:02:47.702 ================= 00:02:47.702 Applications Enabled 00:02:47.702 ================= 00:02:47.702 00:02:47.702 apps: 00:02:47.702 00:02:47.702 00:02:47.702 Message: 00:02:47.702 ================= 00:02:47.702 Libraries Enabled 00:02:47.702 ================= 00:02:47.702 00:02:47.702 libs: 00:02:47.702 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:47.702 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:47.702 cryptodev, dmadev, power, reorder, security, vhost, 00:02:47.702 00:02:47.702 Message: 00:02:47.702 =============== 00:02:47.702 Drivers Enabled 00:02:47.702 =============== 00:02:47.702 00:02:47.702 common: 00:02:47.702 00:02:47.702 bus: 00:02:47.702 pci, vdev, 00:02:47.702 mempool: 00:02:47.702 ring, 00:02:47.702 dma: 00:02:47.702 00:02:47.702 net: 00:02:47.702 00:02:47.702 crypto: 00:02:47.702 00:02:47.702 compress: 00:02:47.702 00:02:47.702 vdpa: 00:02:47.702 00:02:47.702 00:02:47.702 Message: 00:02:47.702 ================= 00:02:47.702 Content Skipped 00:02:47.702 ================= 00:02:47.702 00:02:47.702 apps: 00:02:47.702 dumpcap: explicitly disabled via build config 00:02:47.702 graph: explicitly disabled via build config 00:02:47.702 pdump: explicitly disabled via build config 00:02:47.702 proc-info: explicitly disabled via build config 00:02:47.702 test-acl: explicitly disabled via build config 00:02:47.702 test-bbdev: explicitly disabled via build config 00:02:47.702 test-cmdline: explicitly disabled via build config 00:02:47.702 test-compress-perf: explicitly disabled via build config 00:02:47.702 test-crypto-perf: explicitly disabled via build config 00:02:47.702 test-dma-perf: explicitly disabled via build config 00:02:47.702 test-eventdev: explicitly disabled via build config 00:02:47.702 test-fib: explicitly disabled via build config 00:02:47.702 test-flow-perf: explicitly disabled via build config 00:02:47.702 test-gpudev: explicitly disabled via build config 00:02:47.702 test-mldev: explicitly disabled via build config 00:02:47.702 test-pipeline: explicitly disabled via build config 00:02:47.702 test-pmd: explicitly disabled via build config 00:02:47.702 test-regex: explicitly disabled via build config 00:02:47.702 test-sad: explicitly disabled via build config 00:02:47.702 test-security-perf: explicitly disabled via build config 00:02:47.702 00:02:47.702 libs: 00:02:47.702 argparse: explicitly disabled via build config 00:02:47.702 metrics: explicitly disabled via build config 00:02:47.702 acl: explicitly disabled via build config 00:02:47.702 bbdev: explicitly disabled via build config 00:02:47.702 
bitratestats: explicitly disabled via build config 00:02:47.702 bpf: explicitly disabled via build config 00:02:47.702 cfgfile: explicitly disabled via build config 00:02:47.702 distributor: explicitly disabled via build config 00:02:47.702 efd: explicitly disabled via build config 00:02:47.702 eventdev: explicitly disabled via build config 00:02:47.702 dispatcher: explicitly disabled via build config 00:02:47.702 gpudev: explicitly disabled via build config 00:02:47.702 gro: explicitly disabled via build config 00:02:47.702 gso: explicitly disabled via build config 00:02:47.702 ip_frag: explicitly disabled via build config 00:02:47.702 jobstats: explicitly disabled via build config 00:02:47.702 latencystats: explicitly disabled via build config 00:02:47.702 lpm: explicitly disabled via build config 00:02:47.702 member: explicitly disabled via build config 00:02:47.702 pcapng: explicitly disabled via build config 00:02:47.702 rawdev: explicitly disabled via build config 00:02:47.702 regexdev: explicitly disabled via build config 00:02:47.702 mldev: explicitly disabled via build config 00:02:47.702 rib: explicitly disabled via build config 00:02:47.702 sched: explicitly disabled via build config 00:02:47.702 stack: explicitly disabled via build config 00:02:47.702 ipsec: explicitly disabled via build config 00:02:47.702 pdcp: explicitly disabled via build config 00:02:47.702 fib: explicitly disabled via build config 00:02:47.702 port: explicitly disabled via build config 00:02:47.702 pdump: explicitly disabled via build config 00:02:47.702 table: explicitly disabled via build config 00:02:47.702 pipeline: explicitly disabled via build config 00:02:47.702 graph: explicitly disabled via build config 00:02:47.702 node: explicitly disabled via build config 00:02:47.702 00:02:47.702 drivers: 00:02:47.702 common/cpt: not in enabled drivers build config 00:02:47.702 common/dpaax: not in enabled drivers build config 00:02:47.702 common/iavf: not in enabled drivers build config 00:02:47.702 common/idpf: not in enabled drivers build config 00:02:47.702 common/ionic: not in enabled drivers build config 00:02:47.702 common/mvep: not in enabled drivers build config 00:02:47.702 common/octeontx: not in enabled drivers build config 00:02:47.702 bus/auxiliary: not in enabled drivers build config 00:02:47.702 bus/cdx: not in enabled drivers build config 00:02:47.702 bus/dpaa: not in enabled drivers build config 00:02:47.702 bus/fslmc: not in enabled drivers build config 00:02:47.702 bus/ifpga: not in enabled drivers build config 00:02:47.702 bus/platform: not in enabled drivers build config 00:02:47.702 bus/uacce: not in enabled drivers build config 00:02:47.702 bus/vmbus: not in enabled drivers build config 00:02:47.702 common/cnxk: not in enabled drivers build config 00:02:47.702 common/mlx5: not in enabled drivers build config 00:02:47.702 common/nfp: not in enabled drivers build config 00:02:47.702 common/nitrox: not in enabled drivers build config 00:02:47.702 common/qat: not in enabled drivers build config 00:02:47.702 common/sfc_efx: not in enabled drivers build config 00:02:47.702 mempool/bucket: not in enabled drivers build config 00:02:47.702 mempool/cnxk: not in enabled drivers build config 00:02:47.702 mempool/dpaa: not in enabled drivers build config 00:02:47.702 mempool/dpaa2: not in enabled drivers build config 00:02:47.702 mempool/octeontx: not in enabled drivers build config 00:02:47.702 mempool/stack: not in enabled drivers build config 00:02:47.702 dma/cnxk: not in enabled drivers build 
config 00:02:47.702 dma/dpaa: not in enabled drivers build config 00:02:47.702 dma/dpaa2: not in enabled drivers build config 00:02:47.702 dma/hisilicon: not in enabled drivers build config 00:02:47.702 dma/idxd: not in enabled drivers build config 00:02:47.702 dma/ioat: not in enabled drivers build config 00:02:47.702 dma/skeleton: not in enabled drivers build config 00:02:47.702 net/af_packet: not in enabled drivers build config 00:02:47.702 net/af_xdp: not in enabled drivers build config 00:02:47.702 net/ark: not in enabled drivers build config 00:02:47.702 net/atlantic: not in enabled drivers build config 00:02:47.702 net/avp: not in enabled drivers build config 00:02:47.702 net/axgbe: not in enabled drivers build config 00:02:47.702 net/bnx2x: not in enabled drivers build config 00:02:47.702 net/bnxt: not in enabled drivers build config 00:02:47.702 net/bonding: not in enabled drivers build config 00:02:47.702 net/cnxk: not in enabled drivers build config 00:02:47.702 net/cpfl: not in enabled drivers build config 00:02:47.702 net/cxgbe: not in enabled drivers build config 00:02:47.702 net/dpaa: not in enabled drivers build config 00:02:47.702 net/dpaa2: not in enabled drivers build config 00:02:47.702 net/e1000: not in enabled drivers build config 00:02:47.702 net/ena: not in enabled drivers build config 00:02:47.702 net/enetc: not in enabled drivers build config 00:02:47.702 net/enetfec: not in enabled drivers build config 00:02:47.702 net/enic: not in enabled drivers build config 00:02:47.702 net/failsafe: not in enabled drivers build config 00:02:47.702 net/fm10k: not in enabled drivers build config 00:02:47.702 net/gve: not in enabled drivers build config 00:02:47.702 net/hinic: not in enabled drivers build config 00:02:47.702 net/hns3: not in enabled drivers build config 00:02:47.702 net/i40e: not in enabled drivers build config 00:02:47.702 net/iavf: not in enabled drivers build config 00:02:47.702 net/ice: not in enabled drivers build config 00:02:47.702 net/idpf: not in enabled drivers build config 00:02:47.702 net/igc: not in enabled drivers build config 00:02:47.702 net/ionic: not in enabled drivers build config 00:02:47.702 net/ipn3ke: not in enabled drivers build config 00:02:47.702 net/ixgbe: not in enabled drivers build config 00:02:47.702 net/mana: not in enabled drivers build config 00:02:47.703 net/memif: not in enabled drivers build config 00:02:47.703 net/mlx4: not in enabled drivers build config 00:02:47.703 net/mlx5: not in enabled drivers build config 00:02:47.703 net/mvneta: not in enabled drivers build config 00:02:47.703 net/mvpp2: not in enabled drivers build config 00:02:47.703 net/netvsc: not in enabled drivers build config 00:02:47.703 net/nfb: not in enabled drivers build config 00:02:47.703 net/nfp: not in enabled drivers build config 00:02:47.703 net/ngbe: not in enabled drivers build config 00:02:47.703 net/null: not in enabled drivers build config 00:02:47.703 net/octeontx: not in enabled drivers build config 00:02:47.703 net/octeon_ep: not in enabled drivers build config 00:02:47.703 net/pcap: not in enabled drivers build config 00:02:47.703 net/pfe: not in enabled drivers build config 00:02:47.703 net/qede: not in enabled drivers build config 00:02:47.703 net/ring: not in enabled drivers build config 00:02:47.703 net/sfc: not in enabled drivers build config 00:02:47.703 net/softnic: not in enabled drivers build config 00:02:47.703 net/tap: not in enabled drivers build config 00:02:47.703 net/thunderx: not in enabled drivers build config 00:02:47.703 
net/txgbe: not in enabled drivers build config 00:02:47.703 net/vdev_netvsc: not in enabled drivers build config 00:02:47.703 net/vhost: not in enabled drivers build config 00:02:47.703 net/virtio: not in enabled drivers build config 00:02:47.703 net/vmxnet3: not in enabled drivers build config 00:02:47.703 raw/*: missing internal dependency, "rawdev" 00:02:47.703 crypto/armv8: not in enabled drivers build config 00:02:47.703 crypto/bcmfs: not in enabled drivers build config 00:02:47.703 crypto/caam_jr: not in enabled drivers build config 00:02:47.703 crypto/ccp: not in enabled drivers build config 00:02:47.703 crypto/cnxk: not in enabled drivers build config 00:02:47.703 crypto/dpaa_sec: not in enabled drivers build config 00:02:47.703 crypto/dpaa2_sec: not in enabled drivers build config 00:02:47.703 crypto/ipsec_mb: not in enabled drivers build config 00:02:47.703 crypto/mlx5: not in enabled drivers build config 00:02:47.703 crypto/mvsam: not in enabled drivers build config 00:02:47.703 crypto/nitrox: not in enabled drivers build config 00:02:47.703 crypto/null: not in enabled drivers build config 00:02:47.703 crypto/octeontx: not in enabled drivers build config 00:02:47.703 crypto/openssl: not in enabled drivers build config 00:02:47.703 crypto/scheduler: not in enabled drivers build config 00:02:47.703 crypto/uadk: not in enabled drivers build config 00:02:47.703 crypto/virtio: not in enabled drivers build config 00:02:47.703 compress/isal: not in enabled drivers build config 00:02:47.703 compress/mlx5: not in enabled drivers build config 00:02:47.703 compress/nitrox: not in enabled drivers build config 00:02:47.703 compress/octeontx: not in enabled drivers build config 00:02:47.703 compress/zlib: not in enabled drivers build config 00:02:47.703 regex/*: missing internal dependency, "regexdev" 00:02:47.703 ml/*: missing internal dependency, "mldev" 00:02:47.703 vdpa/ifc: not in enabled drivers build config 00:02:47.703 vdpa/mlx5: not in enabled drivers build config 00:02:47.703 vdpa/nfp: not in enabled drivers build config 00:02:47.703 vdpa/sfc: not in enabled drivers build config 00:02:47.703 event/*: missing internal dependency, "eventdev" 00:02:47.703 baseband/*: missing internal dependency, "bbdev" 00:02:47.703 gpu/*: missing internal dependency, "gpudev" 00:02:47.703 00:02:47.703 00:02:47.960 Build targets in project: 85 00:02:47.960 00:02:47.960 DPDK 24.03.0 00:02:47.960 00:02:47.960 User defined options 00:02:47.960 buildtype : debug 00:02:47.960 default_library : shared 00:02:47.960 libdir : lib 00:02:47.960 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:47.960 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:47.960 c_link_args : 00:02:47.960 cpu_instruction_set: native 00:02:47.960 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:47.960 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:47.960 enable_docs : false 00:02:47.960 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:47.960 enable_kmods : false 00:02:47.960 max_lcores : 128 00:02:47.960 tests : false 00:02:47.960 00:02:47.960 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:48.892 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:48.892 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:48.892 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:48.892 [3/268] Linking static target lib/librte_kvargs.a 00:02:48.892 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:48.892 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:48.892 [6/268] Linking static target lib/librte_log.a 00:02:49.825 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.825 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:49.825 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:50.082 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:50.082 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:50.082 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:50.340 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:50.340 [14/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.340 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:50.340 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:50.340 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:50.340 [18/268] Linking static target lib/librte_telemetry.a 00:02:50.340 [19/268] Linking target lib/librte_log.so.24.1 00:02:50.617 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:50.875 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:50.875 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:51.137 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:51.137 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:51.404 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:51.404 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:51.404 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:51.404 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:51.662 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.662 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:51.662 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:51.662 [32/268] Linking target lib/librte_telemetry.so.24.1 00:02:51.662 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:51.920 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:52.177 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:52.177 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:52.435 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:52.435 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:52.435 [39/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:52.435 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:52.435 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:52.692 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:52.692 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:52.692 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:53.258 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:53.258 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:53.515 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:53.515 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:53.515 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:53.515 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:53.773 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:54.031 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:54.289 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:54.289 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:54.289 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:54.547 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:54.547 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:54.805 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:54.805 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:54.805 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:55.063 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:55.321 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:55.321 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:55.579 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:55.579 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:55.838 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:55.838 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:56.096 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:56.354 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:56.354 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:56.354 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:56.611 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:56.867 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:56.867 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:56.867 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:56.867 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:57.432 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:57.432 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:57.432 [79/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:57.691 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:57.691 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:57.949 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:57.949 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:57.949 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:57.949 [85/268] Linking static target lib/librte_eal.a 00:02:58.207 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:58.207 [87/268] Linking static target lib/librte_ring.a 00:02:58.474 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:58.752 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:58.752 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:58.752 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:58.752 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:59.009 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.009 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:59.009 [95/268] Linking static target lib/librte_rcu.a 00:02:59.009 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:59.009 [97/268] Linking static target lib/librte_mempool.a 00:02:59.009 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:59.574 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:59.574 [100/268] Linking static target lib/librte_mbuf.a 00:02:59.832 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:59.832 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.832 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:59.832 [104/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:59.832 [105/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:00.090 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:00.090 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:00.348 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:00.348 [109/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.606 [110/268] Linking static target lib/librte_net.a 00:03:00.606 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:00.606 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:00.864 [113/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:00.864 [114/268] Linking static target lib/librte_meter.a 00:03:00.864 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:00.864 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.864 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.121 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:01.378 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.378 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:03:01.636 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:01.893 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:01.893 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:02.151 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:02.151 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:02.408 [126/268] Linking static target lib/librte_pci.a 00:03:02.408 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:02.408 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:02.408 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:02.408 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:02.665 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:02.665 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:02.665 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.665 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:02.665 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:02.921 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:02.921 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:02.921 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:02.921 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:02.921 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:02.921 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:02.921 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:02.921 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:02.921 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:02.921 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:03.179 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:03.179 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:03.436 [148/268] Linking static target lib/librte_ethdev.a 00:03:03.436 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:03.694 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:03.694 [151/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:03.694 [152/268] Linking static target lib/librte_cmdline.a 00:03:03.951 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:03.951 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:03.951 [155/268] Linking static target lib/librte_timer.a 00:03:04.209 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:04.466 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.466 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:04.466 [159/268] Linking static target lib/librte_hash.a 00:03:04.467 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:04.467 [161/268] Linking static target 
lib/librte_compressdev.a 00:03:04.724 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:04.983 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.983 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:05.246 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:05.504 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:05.762 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:05.762 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:05.762 [169/268] Linking static target lib/librte_dmadev.a 00:03:05.762 [170/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.019 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.019 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:06.019 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:06.019 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.277 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:06.533 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:06.533 [177/268] Linking static target lib/librte_cryptodev.a 00:03:06.790 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:06.790 [179/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:07.048 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:07.048 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:07.048 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.048 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:07.048 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:07.306 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:07.562 [186/268] Linking static target lib/librte_power.a 00:03:07.820 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:07.820 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:07.820 [189/268] Linking static target lib/librte_security.a 00:03:07.820 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:07.820 [191/268] Linking static target lib/librte_reorder.a 00:03:07.820 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:08.078 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:08.642 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.642 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:08.642 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.900 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.900 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:09.158 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:09.158 [200/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:09.417 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.417 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:09.675 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:09.675 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:09.934 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:09.934 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:09.934 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:09.934 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:10.192 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:10.192 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:10.450 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:10.450 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:10.450 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:10.450 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.450 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.450 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:10.450 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:10.450 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:10.708 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:10.708 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.708 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.708 [222/268] Linking static target drivers/librte_bus_pci.a 00:03:10.708 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.708 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:10.708 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.708 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.708 [227/268] Linking static target drivers/librte_mempool_ring.a 00:03:11.274 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.840 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.840 [230/268] Linking target lib/librte_eal.so.24.1 00:03:12.098 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:12.098 [232/268] Linking target lib/librte_ring.so.24.1 00:03:12.098 [233/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:12.098 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:12.098 [235/268] Linking target lib/librte_pci.so.24.1 00:03:12.098 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:12.098 [237/268] Linking target lib/librte_timer.so.24.1 00:03:12.098 [238/268] Linking target lib/librte_meter.so.24.1 00:03:12.098 [239/268] Linking static target lib/librte_vhost.a 
00:03:12.098 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:12.098 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:12.098 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:12.098 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:12.355 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:12.355 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:12.355 [246/268] Linking target lib/librte_rcu.so.24.1 00:03:12.355 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:12.355 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:12.355 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:12.355 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:12.355 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:12.614 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:12.614 [253/268] Linking target lib/librte_net.so.24.1 00:03:12.614 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:12.614 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:12.614 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:12.873 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:12.873 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:12.873 [259/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.873 [260/268] Linking target lib/librte_cmdline.so.24.1 00:03:12.873 [261/268] Linking target lib/librte_hash.so.24.1 00:03:12.873 [262/268] Linking target lib/librte_security.so.24.1 00:03:12.873 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:12.873 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:13.132 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:13.132 [266/268] Linking target lib/librte_power.so.24.1 00:03:13.391 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.668 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:13.668 INFO: autodetecting backend as ninja 00:03:13.668 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:14.601 CC lib/log/log.o 00:03:14.601 CC lib/log/log_flags.o 00:03:14.601 CC lib/log/log_deprecated.o 00:03:14.601 CC lib/ut/ut.o 00:03:14.601 CC lib/ut_mock/mock.o 00:03:14.859 LIB libspdk_log.a 00:03:14.859 SO libspdk_log.so.7.0 00:03:14.859 LIB libspdk_ut.a 00:03:14.859 LIB libspdk_ut_mock.a 00:03:14.859 SO libspdk_ut.so.2.0 00:03:14.859 SO libspdk_ut_mock.so.6.0 00:03:15.117 SYMLINK libspdk_log.so 00:03:15.117 SYMLINK libspdk_ut.so 00:03:15.117 SYMLINK libspdk_ut_mock.so 00:03:15.117 CXX lib/trace_parser/trace.o 00:03:15.117 CC lib/dma/dma.o 00:03:15.117 CC lib/util/base64.o 00:03:15.117 CC lib/util/bit_array.o 00:03:15.117 CC lib/util/cpuset.o 00:03:15.117 CC lib/util/crc16.o 00:03:15.117 CC lib/util/crc32.o 00:03:15.117 CC lib/util/crc32c.o 00:03:15.117 CC lib/ioat/ioat.o 00:03:15.375 CC lib/util/crc32_ieee.o 00:03:15.375 CC lib/vfio_user/host/vfio_user_pci.o 00:03:15.375 CC lib/util/crc64.o 00:03:15.375 CC 
lib/vfio_user/host/vfio_user.o 00:03:15.375 CC lib/util/dif.o 00:03:15.375 LIB libspdk_dma.a 00:03:15.375 SO libspdk_dma.so.4.0 00:03:15.633 CC lib/util/fd.o 00:03:15.633 SYMLINK libspdk_dma.so 00:03:15.633 CC lib/util/fd_group.o 00:03:15.633 CC lib/util/file.o 00:03:15.633 CC lib/util/hexlify.o 00:03:15.633 LIB libspdk_ioat.a 00:03:15.633 CC lib/util/iov.o 00:03:15.633 SO libspdk_ioat.so.7.0 00:03:15.633 CC lib/util/math.o 00:03:15.633 SYMLINK libspdk_ioat.so 00:03:15.633 CC lib/util/net.o 00:03:15.633 CC lib/util/pipe.o 00:03:15.633 CC lib/util/strerror_tls.o 00:03:15.891 LIB libspdk_vfio_user.a 00:03:15.891 CC lib/util/string.o 00:03:15.891 CC lib/util/uuid.o 00:03:15.891 SO libspdk_vfio_user.so.5.0 00:03:15.891 CC lib/util/xor.o 00:03:15.891 CC lib/util/zipf.o 00:03:15.891 SYMLINK libspdk_vfio_user.so 00:03:16.150 LIB libspdk_util.a 00:03:16.150 LIB libspdk_trace_parser.a 00:03:16.409 SO libspdk_util.so.9.1 00:03:16.409 SO libspdk_trace_parser.so.5.0 00:03:16.409 SYMLINK libspdk_trace_parser.so 00:03:16.409 SYMLINK libspdk_util.so 00:03:16.723 CC lib/json/json_parse.o 00:03:16.723 CC lib/conf/conf.o 00:03:16.723 CC lib/vmd/vmd.o 00:03:16.723 CC lib/json/json_util.o 00:03:16.723 CC lib/json/json_write.o 00:03:16.723 CC lib/vmd/led.o 00:03:16.723 CC lib/rdma_provider/common.o 00:03:16.723 CC lib/idxd/idxd.o 00:03:16.723 CC lib/rdma_utils/rdma_utils.o 00:03:16.723 CC lib/env_dpdk/env.o 00:03:17.019 CC lib/idxd/idxd_user.o 00:03:17.019 CC lib/idxd/idxd_kernel.o 00:03:17.019 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:17.019 CC lib/env_dpdk/memory.o 00:03:17.019 LIB libspdk_conf.a 00:03:17.019 LIB libspdk_json.a 00:03:17.019 CC lib/env_dpdk/pci.o 00:03:17.019 LIB libspdk_rdma_utils.a 00:03:17.019 SO libspdk_conf.so.6.0 00:03:17.019 SO libspdk_json.so.6.0 00:03:17.019 LIB libspdk_rdma_provider.a 00:03:17.019 SO libspdk_rdma_utils.so.1.0 00:03:17.276 SO libspdk_rdma_provider.so.6.0 00:03:17.276 SYMLINK libspdk_json.so 00:03:17.276 SYMLINK libspdk_conf.so 00:03:17.276 CC lib/env_dpdk/init.o 00:03:17.276 CC lib/env_dpdk/threads.o 00:03:17.276 CC lib/env_dpdk/pci_ioat.o 00:03:17.276 SYMLINK libspdk_rdma_utils.so 00:03:17.276 SYMLINK libspdk_rdma_provider.so 00:03:17.276 CC lib/env_dpdk/pci_virtio.o 00:03:17.276 LIB libspdk_idxd.a 00:03:17.276 SO libspdk_idxd.so.12.0 00:03:17.276 CC lib/env_dpdk/pci_vmd.o 00:03:17.276 LIB libspdk_vmd.a 00:03:17.276 CC lib/env_dpdk/pci_idxd.o 00:03:17.276 SO libspdk_vmd.so.6.0 00:03:17.276 SYMLINK libspdk_idxd.so 00:03:17.276 CC lib/env_dpdk/pci_event.o 00:03:17.276 CC lib/jsonrpc/jsonrpc_server.o 00:03:17.276 CC lib/env_dpdk/sigbus_handler.o 00:03:17.533 SYMLINK libspdk_vmd.so 00:03:17.533 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:17.533 CC lib/env_dpdk/pci_dpdk.o 00:03:17.533 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:17.533 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:17.533 CC lib/jsonrpc/jsonrpc_client.o 00:03:17.533 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:17.791 LIB libspdk_jsonrpc.a 00:03:17.791 SO libspdk_jsonrpc.so.6.0 00:03:18.049 SYMLINK libspdk_jsonrpc.so 00:03:18.308 CC lib/rpc/rpc.o 00:03:18.308 LIB libspdk_env_dpdk.a 00:03:18.566 SO libspdk_env_dpdk.so.14.1 00:03:18.566 LIB libspdk_rpc.a 00:03:18.566 SO libspdk_rpc.so.6.0 00:03:18.566 SYMLINK libspdk_rpc.so 00:03:18.566 SYMLINK libspdk_env_dpdk.so 00:03:18.825 CC lib/notify/notify.o 00:03:18.825 CC lib/notify/notify_rpc.o 00:03:18.825 CC lib/keyring/keyring_rpc.o 00:03:18.825 CC lib/trace/trace.o 00:03:18.825 CC lib/keyring/keyring.o 00:03:18.825 CC lib/trace/trace_flags.o 00:03:18.825 CC 
lib/trace/trace_rpc.o 00:03:19.083 LIB libspdk_notify.a 00:03:19.083 LIB libspdk_keyring.a 00:03:19.083 SO libspdk_notify.so.6.0 00:03:19.083 SO libspdk_keyring.so.1.0 00:03:19.083 LIB libspdk_trace.a 00:03:19.083 SYMLINK libspdk_notify.so 00:03:19.342 SO libspdk_trace.so.10.0 00:03:19.342 SYMLINK libspdk_keyring.so 00:03:19.342 SYMLINK libspdk_trace.so 00:03:19.602 CC lib/thread/iobuf.o 00:03:19.602 CC lib/thread/thread.o 00:03:19.602 CC lib/sock/sock.o 00:03:19.602 CC lib/sock/sock_rpc.o 00:03:20.169 LIB libspdk_sock.a 00:03:20.169 SO libspdk_sock.so.10.0 00:03:20.169 SYMLINK libspdk_sock.so 00:03:20.428 CC lib/nvme/nvme_ctrlr.o 00:03:20.428 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:20.428 CC lib/nvme/nvme_fabric.o 00:03:20.428 CC lib/nvme/nvme_ns.o 00:03:20.428 CC lib/nvme/nvme_ns_cmd.o 00:03:20.428 CC lib/nvme/nvme_pcie_common.o 00:03:20.428 CC lib/nvme/nvme_pcie.o 00:03:20.428 CC lib/nvme/nvme.o 00:03:20.428 CC lib/nvme/nvme_qpair.o 00:03:21.362 LIB libspdk_thread.a 00:03:21.362 SO libspdk_thread.so.10.1 00:03:21.362 CC lib/nvme/nvme_quirks.o 00:03:21.362 CC lib/nvme/nvme_transport.o 00:03:21.362 SYMLINK libspdk_thread.so 00:03:21.362 CC lib/nvme/nvme_discovery.o 00:03:21.621 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:21.621 CC lib/accel/accel.o 00:03:21.621 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:21.621 CC lib/nvme/nvme_tcp.o 00:03:21.621 CC lib/nvme/nvme_opal.o 00:03:21.879 CC lib/nvme/nvme_io_msg.o 00:03:21.879 CC lib/nvme/nvme_poll_group.o 00:03:22.196 CC lib/nvme/nvme_zns.o 00:03:22.454 CC lib/nvme/nvme_stubs.o 00:03:22.454 CC lib/nvme/nvme_auth.o 00:03:22.711 CC lib/nvme/nvme_cuse.o 00:03:22.711 CC lib/nvme/nvme_rdma.o 00:03:22.967 CC lib/accel/accel_rpc.o 00:03:22.967 CC lib/blob/blobstore.o 00:03:22.967 CC lib/accel/accel_sw.o 00:03:23.225 CC lib/blob/request.o 00:03:23.225 CC lib/blob/zeroes.o 00:03:23.225 CC lib/blob/blob_bs_dev.o 00:03:23.483 LIB libspdk_accel.a 00:03:23.740 SO libspdk_accel.so.15.1 00:03:23.740 SYMLINK libspdk_accel.so 00:03:23.740 CC lib/init/json_config.o 00:03:23.740 CC lib/init/subsystem.o 00:03:23.740 CC lib/init/subsystem_rpc.o 00:03:23.740 CC lib/init/rpc.o 00:03:23.740 CC lib/virtio/virtio.o 00:03:23.998 CC lib/bdev/bdev.o 00:03:23.998 CC lib/bdev/bdev_rpc.o 00:03:23.998 CC lib/virtio/virtio_vhost_user.o 00:03:23.998 CC lib/virtio/virtio_vfio_user.o 00:03:23.998 CC lib/virtio/virtio_pci.o 00:03:24.256 CC lib/bdev/bdev_zone.o 00:03:24.256 LIB libspdk_init.a 00:03:24.256 SO libspdk_init.so.5.0 00:03:24.256 SYMLINK libspdk_init.so 00:03:24.256 CC lib/bdev/part.o 00:03:24.256 CC lib/bdev/scsi_nvme.o 00:03:24.514 LIB libspdk_virtio.a 00:03:24.514 CC lib/event/app.o 00:03:24.514 CC lib/event/reactor.o 00:03:24.514 CC lib/event/log_rpc.o 00:03:24.514 CC lib/event/app_rpc.o 00:03:24.773 CC lib/event/scheduler_static.o 00:03:24.773 SO libspdk_virtio.so.7.0 00:03:24.773 SYMLINK libspdk_virtio.so 00:03:25.031 LIB libspdk_nvme.a 00:03:25.290 SO libspdk_nvme.so.13.1 00:03:25.290 LIB libspdk_event.a 00:03:25.290 SO libspdk_event.so.14.0 00:03:25.548 SYMLINK libspdk_event.so 00:03:25.548 SYMLINK libspdk_nvme.so 00:03:26.923 LIB libspdk_bdev.a 00:03:26.923 SO libspdk_bdev.so.15.1 00:03:27.181 SYMLINK libspdk_bdev.so 00:03:27.181 LIB libspdk_blob.a 00:03:27.457 SO libspdk_blob.so.11.0 00:03:27.457 CC lib/nbd/nbd.o 00:03:27.457 CC lib/nbd/nbd_rpc.o 00:03:27.457 CC lib/nvmf/ctrlr.o 00:03:27.457 CC lib/nvmf/ctrlr_discovery.o 00:03:27.457 CC lib/ublk/ublk.o 00:03:27.457 CC lib/nvmf/ctrlr_bdev.o 00:03:27.457 CC lib/nvmf/subsystem.o 00:03:27.457 CC lib/scsi/dev.o 00:03:27.457 CC 
lib/ftl/ftl_core.o 00:03:27.457 SYMLINK libspdk_blob.so 00:03:27.457 CC lib/ftl/ftl_init.o 00:03:27.715 CC lib/ftl/ftl_layout.o 00:03:27.715 CC lib/scsi/lun.o 00:03:27.974 CC lib/scsi/port.o 00:03:27.974 LIB libspdk_nbd.a 00:03:27.974 CC lib/ublk/ublk_rpc.o 00:03:27.974 SO libspdk_nbd.so.7.0 00:03:28.233 CC lib/ftl/ftl_debug.o 00:03:28.233 CC lib/scsi/scsi.o 00:03:28.233 CC lib/scsi/scsi_bdev.o 00:03:28.233 CC lib/ftl/ftl_io.o 00:03:28.233 SYMLINK libspdk_nbd.so 00:03:28.233 CC lib/ftl/ftl_sb.o 00:03:28.233 CC lib/ftl/ftl_l2p.o 00:03:28.233 CC lib/ftl/ftl_l2p_flat.o 00:03:28.491 CC lib/ftl/ftl_nv_cache.o 00:03:28.491 LIB libspdk_ublk.a 00:03:28.491 SO libspdk_ublk.so.3.0 00:03:28.491 SYMLINK libspdk_ublk.so 00:03:28.491 CC lib/nvmf/nvmf.o 00:03:28.491 CC lib/nvmf/nvmf_rpc.o 00:03:28.491 CC lib/ftl/ftl_band.o 00:03:28.491 CC lib/ftl/ftl_band_ops.o 00:03:28.750 CC lib/ftl/ftl_writer.o 00:03:28.750 CC lib/ftl/ftl_rq.o 00:03:28.750 CC lib/scsi/scsi_pr.o 00:03:29.008 CC lib/nvmf/transport.o 00:03:29.267 CC lib/scsi/scsi_rpc.o 00:03:29.267 CC lib/nvmf/tcp.o 00:03:29.267 CC lib/lvol/lvol.o 00:03:29.267 CC lib/blobfs/blobfs.o 00:03:29.525 CC lib/scsi/task.o 00:03:29.525 CC lib/ftl/ftl_reloc.o 00:03:29.783 LIB libspdk_scsi.a 00:03:29.783 SO libspdk_scsi.so.9.0 00:03:29.783 CC lib/nvmf/stubs.o 00:03:30.042 SYMLINK libspdk_scsi.so 00:03:30.042 CC lib/nvmf/mdns_server.o 00:03:30.042 CC lib/nvmf/rdma.o 00:03:30.042 CC lib/nvmf/auth.o 00:03:30.300 CC lib/ftl/ftl_l2p_cache.o 00:03:30.300 CC lib/iscsi/conn.o 00:03:30.300 CC lib/blobfs/tree.o 00:03:30.300 CC lib/ftl/ftl_p2l.o 00:03:30.300 CC lib/ftl/mngt/ftl_mngt.o 00:03:30.585 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:30.585 LIB libspdk_blobfs.a 00:03:30.585 SO libspdk_blobfs.so.10.0 00:03:30.843 SYMLINK libspdk_blobfs.so 00:03:30.843 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:30.843 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:30.843 LIB libspdk_lvol.a 00:03:30.843 SO libspdk_lvol.so.10.0 00:03:30.843 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:31.101 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:31.101 SYMLINK libspdk_lvol.so 00:03:31.101 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:31.101 CC lib/vhost/vhost.o 00:03:31.101 CC lib/vhost/vhost_rpc.o 00:03:31.101 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:31.101 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:31.101 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:31.358 CC lib/iscsi/init_grp.o 00:03:31.358 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:31.358 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:31.358 CC lib/iscsi/iscsi.o 00:03:31.358 CC lib/vhost/vhost_scsi.o 00:03:31.615 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:31.615 CC lib/iscsi/md5.o 00:03:31.615 CC lib/ftl/utils/ftl_conf.o 00:03:31.873 CC lib/iscsi/param.o 00:03:31.873 CC lib/vhost/vhost_blk.o 00:03:31.873 CC lib/vhost/rte_vhost_user.o 00:03:31.873 CC lib/ftl/utils/ftl_md.o 00:03:31.873 CC lib/iscsi/portal_grp.o 00:03:31.873 CC lib/iscsi/tgt_node.o 00:03:32.131 CC lib/ftl/utils/ftl_mempool.o 00:03:32.131 CC lib/iscsi/iscsi_subsystem.o 00:03:32.131 CC lib/ftl/utils/ftl_bitmap.o 00:03:32.388 CC lib/iscsi/iscsi_rpc.o 00:03:32.388 CC lib/iscsi/task.o 00:03:32.645 CC lib/ftl/utils/ftl_property.o 00:03:32.645 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:32.645 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:32.902 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:32.902 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:32.902 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:32.902 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:32.902 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:32.902 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:33.186 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:03:33.186 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:33.186 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:33.186 CC lib/ftl/base/ftl_base_dev.o 00:03:33.186 CC lib/ftl/base/ftl_base_bdev.o 00:03:33.186 CC lib/ftl/ftl_trace.o 00:03:33.444 LIB libspdk_nvmf.a 00:03:33.444 LIB libspdk_vhost.a 00:03:33.444 LIB libspdk_ftl.a 00:03:33.701 SO libspdk_nvmf.so.19.0 00:03:33.701 SO libspdk_vhost.so.8.0 00:03:33.701 LIB libspdk_iscsi.a 00:03:33.701 SO libspdk_iscsi.so.8.0 00:03:33.701 SYMLINK libspdk_vhost.so 00:03:33.959 SO libspdk_ftl.so.9.0 00:03:33.959 SYMLINK libspdk_nvmf.so 00:03:33.959 SYMLINK libspdk_iscsi.so 00:03:34.524 SYMLINK libspdk_ftl.so 00:03:34.781 CC module/env_dpdk/env_dpdk_rpc.o 00:03:35.038 CC module/sock/posix/posix.o 00:03:35.038 CC module/keyring/file/keyring.o 00:03:35.039 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:35.039 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:35.039 CC module/blob/bdev/blob_bdev.o 00:03:35.039 CC module/accel/ioat/accel_ioat.o 00:03:35.039 CC module/accel/error/accel_error.o 00:03:35.039 CC module/scheduler/gscheduler/gscheduler.o 00:03:35.039 CC module/keyring/linux/keyring.o 00:03:35.039 LIB libspdk_env_dpdk_rpc.a 00:03:35.039 SO libspdk_env_dpdk_rpc.so.6.0 00:03:35.039 LIB libspdk_scheduler_gscheduler.a 00:03:35.039 SO libspdk_scheduler_gscheduler.so.4.0 00:03:35.039 SYMLINK libspdk_env_dpdk_rpc.so 00:03:35.039 CC module/keyring/linux/keyring_rpc.o 00:03:35.039 CC module/accel/error/accel_error_rpc.o 00:03:35.297 CC module/keyring/file/keyring_rpc.o 00:03:35.297 LIB libspdk_scheduler_dpdk_governor.a 00:03:35.297 SYMLINK libspdk_scheduler_gscheduler.so 00:03:35.297 CC module/accel/ioat/accel_ioat_rpc.o 00:03:35.297 LIB libspdk_scheduler_dynamic.a 00:03:35.297 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:35.297 SO libspdk_scheduler_dynamic.so.4.0 00:03:35.297 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:35.297 SYMLINK libspdk_scheduler_dynamic.so 00:03:35.297 LIB libspdk_keyring_file.a 00:03:35.297 SO libspdk_keyring_file.so.1.0 00:03:35.297 LIB libspdk_keyring_linux.a 00:03:35.297 LIB libspdk_blob_bdev.a 00:03:35.555 SO libspdk_keyring_linux.so.1.0 00:03:35.555 LIB libspdk_accel_error.a 00:03:35.555 LIB libspdk_accel_ioat.a 00:03:35.555 SYMLINK libspdk_keyring_file.so 00:03:35.555 SO libspdk_blob_bdev.so.11.0 00:03:35.555 SO libspdk_accel_error.so.2.0 00:03:35.555 SO libspdk_accel_ioat.so.6.0 00:03:35.555 CC module/accel/dsa/accel_dsa.o 00:03:35.555 CC module/accel/dsa/accel_dsa_rpc.o 00:03:35.555 CC module/accel/iaa/accel_iaa_rpc.o 00:03:35.555 CC module/accel/iaa/accel_iaa.o 00:03:35.555 SYMLINK libspdk_blob_bdev.so 00:03:35.555 SYMLINK libspdk_keyring_linux.so 00:03:35.555 SYMLINK libspdk_accel_error.so 00:03:35.555 SYMLINK libspdk_accel_ioat.so 00:03:35.813 LIB libspdk_accel_iaa.a 00:03:35.813 CC module/bdev/error/vbdev_error.o 00:03:35.813 CC module/bdev/delay/vbdev_delay.o 00:03:35.813 CC module/bdev/gpt/gpt.o 00:03:35.813 CC module/bdev/lvol/vbdev_lvol.o 00:03:35.813 CC module/blobfs/bdev/blobfs_bdev.o 00:03:36.071 LIB libspdk_accel_dsa.a 00:03:36.071 SO libspdk_accel_iaa.so.3.0 00:03:36.071 SO libspdk_accel_dsa.so.5.0 00:03:36.071 CC module/bdev/malloc/bdev_malloc.o 00:03:36.071 CC module/bdev/null/bdev_null.o 00:03:36.071 SYMLINK libspdk_accel_iaa.so 00:03:36.071 CC module/bdev/null/bdev_null_rpc.o 00:03:36.071 SYMLINK libspdk_accel_dsa.so 00:03:36.071 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:36.071 LIB libspdk_sock_posix.a 00:03:36.328 CC module/bdev/gpt/vbdev_gpt.o 00:03:36.328 SO 
libspdk_sock_posix.so.6.0 00:03:36.328 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:36.328 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:36.328 SYMLINK libspdk_sock_posix.so 00:03:36.328 CC module/bdev/error/vbdev_error_rpc.o 00:03:36.586 LIB libspdk_bdev_null.a 00:03:36.586 LIB libspdk_blobfs_bdev.a 00:03:36.586 SO libspdk_bdev_null.so.6.0 00:03:36.586 SO libspdk_blobfs_bdev.so.6.0 00:03:36.586 LIB libspdk_bdev_delay.a 00:03:36.586 SO libspdk_bdev_delay.so.6.0 00:03:36.586 CC module/bdev/nvme/bdev_nvme.o 00:03:36.586 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:36.586 SYMLINK libspdk_blobfs_bdev.so 00:03:36.586 CC module/bdev/passthru/vbdev_passthru.o 00:03:36.586 LIB libspdk_bdev_error.a 00:03:36.586 LIB libspdk_bdev_gpt.a 00:03:36.586 SYMLINK libspdk_bdev_null.so 00:03:36.842 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:36.842 SO libspdk_bdev_error.so.6.0 00:03:36.842 SO libspdk_bdev_gpt.so.6.0 00:03:36.842 SYMLINK libspdk_bdev_delay.so 00:03:36.842 CC module/bdev/nvme/nvme_rpc.o 00:03:36.842 SYMLINK libspdk_bdev_error.so 00:03:36.842 SYMLINK libspdk_bdev_gpt.so 00:03:36.842 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:36.842 LIB libspdk_bdev_lvol.a 00:03:37.100 LIB libspdk_bdev_malloc.a 00:03:37.100 CC module/bdev/raid/bdev_raid.o 00:03:37.100 SO libspdk_bdev_lvol.so.6.0 00:03:37.100 SO libspdk_bdev_malloc.so.6.0 00:03:37.100 CC module/bdev/split/vbdev_split.o 00:03:37.100 CC module/bdev/raid/bdev_raid_rpc.o 00:03:37.100 SYMLINK libspdk_bdev_lvol.so 00:03:37.100 SYMLINK libspdk_bdev_malloc.so 00:03:37.100 CC module/bdev/nvme/bdev_mdns_client.o 00:03:37.100 LIB libspdk_bdev_passthru.a 00:03:37.100 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:37.100 SO libspdk_bdev_passthru.so.6.0 00:03:37.358 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:37.358 SYMLINK libspdk_bdev_passthru.so 00:03:37.358 CC module/bdev/aio/bdev_aio.o 00:03:37.358 CC module/bdev/split/vbdev_split_rpc.o 00:03:37.616 CC module/bdev/raid/bdev_raid_sb.o 00:03:37.616 CC module/bdev/nvme/vbdev_opal.o 00:03:37.616 CC module/bdev/ftl/bdev_ftl.o 00:03:37.616 CC module/bdev/iscsi/bdev_iscsi.o 00:03:37.875 LIB libspdk_bdev_zone_block.a 00:03:37.875 LIB libspdk_bdev_split.a 00:03:37.875 SO libspdk_bdev_zone_block.so.6.0 00:03:37.875 SO libspdk_bdev_split.so.6.0 00:03:37.875 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:37.875 CC module/bdev/raid/raid0.o 00:03:37.875 SYMLINK libspdk_bdev_zone_block.so 00:03:37.875 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:37.875 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:37.875 SYMLINK libspdk_bdev_split.so 00:03:37.875 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:37.875 CC module/bdev/aio/bdev_aio_rpc.o 00:03:38.133 CC module/bdev/raid/raid1.o 00:03:38.133 CC module/bdev/raid/concat.o 00:03:38.133 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:38.133 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:38.391 LIB libspdk_bdev_aio.a 00:03:38.391 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:38.391 LIB libspdk_bdev_ftl.a 00:03:38.391 SO libspdk_bdev_aio.so.6.0 00:03:38.391 SO libspdk_bdev_ftl.so.6.0 00:03:38.391 SYMLINK libspdk_bdev_aio.so 00:03:38.391 SYMLINK libspdk_bdev_ftl.so 00:03:38.391 LIB libspdk_bdev_iscsi.a 00:03:38.649 SO libspdk_bdev_iscsi.so.6.0 00:03:38.649 LIB libspdk_bdev_raid.a 00:03:38.649 SYMLINK libspdk_bdev_iscsi.so 00:03:38.649 SO libspdk_bdev_raid.so.6.0 00:03:38.905 LIB libspdk_bdev_virtio.a 00:03:38.905 SYMLINK libspdk_bdev_raid.so 00:03:38.905 SO libspdk_bdev_virtio.so.6.0 00:03:38.905 SYMLINK libspdk_bdev_virtio.so 00:03:39.837 LIB libspdk_bdev_nvme.a 00:03:39.837 
SO libspdk_bdev_nvme.so.7.0 00:03:40.109 SYMLINK libspdk_bdev_nvme.so 00:03:40.674 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:40.674 CC module/event/subsystems/scheduler/scheduler.o 00:03:40.674 CC module/event/subsystems/iobuf/iobuf.o 00:03:40.674 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:40.674 CC module/event/subsystems/sock/sock.o 00:03:40.674 CC module/event/subsystems/vmd/vmd.o 00:03:40.674 CC module/event/subsystems/keyring/keyring.o 00:03:40.674 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:40.674 LIB libspdk_event_sock.a 00:03:40.674 SO libspdk_event_sock.so.5.0 00:03:40.674 LIB libspdk_event_scheduler.a 00:03:40.674 LIB libspdk_event_keyring.a 00:03:40.674 LIB libspdk_event_vhost_blk.a 00:03:40.674 LIB libspdk_event_vmd.a 00:03:40.674 SYMLINK libspdk_event_sock.so 00:03:40.674 LIB libspdk_event_iobuf.a 00:03:40.674 SO libspdk_event_scheduler.so.4.0 00:03:40.674 SO libspdk_event_keyring.so.1.0 00:03:40.930 SO libspdk_event_vhost_blk.so.3.0 00:03:40.931 SO libspdk_event_vmd.so.6.0 00:03:40.931 SO libspdk_event_iobuf.so.3.0 00:03:40.931 SYMLINK libspdk_event_scheduler.so 00:03:40.931 SYMLINK libspdk_event_keyring.so 00:03:40.931 SYMLINK libspdk_event_vhost_blk.so 00:03:40.931 SYMLINK libspdk_event_vmd.so 00:03:40.931 SYMLINK libspdk_event_iobuf.so 00:03:41.187 CC module/event/subsystems/accel/accel.o 00:03:41.444 LIB libspdk_event_accel.a 00:03:41.444 SO libspdk_event_accel.so.6.0 00:03:41.444 SYMLINK libspdk_event_accel.so 00:03:41.702 CC module/event/subsystems/bdev/bdev.o 00:03:41.959 LIB libspdk_event_bdev.a 00:03:41.959 SO libspdk_event_bdev.so.6.0 00:03:41.959 SYMLINK libspdk_event_bdev.so 00:03:42.218 CC module/event/subsystems/ublk/ublk.o 00:03:42.218 CC module/event/subsystems/nbd/nbd.o 00:03:42.218 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:42.218 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:42.218 CC module/event/subsystems/scsi/scsi.o 00:03:42.475 LIB libspdk_event_ublk.a 00:03:42.475 LIB libspdk_event_nbd.a 00:03:42.475 SO libspdk_event_ublk.so.3.0 00:03:42.475 SO libspdk_event_nbd.so.6.0 00:03:42.475 LIB libspdk_event_scsi.a 00:03:42.475 SYMLINK libspdk_event_ublk.so 00:03:42.475 SO libspdk_event_scsi.so.6.0 00:03:42.475 SYMLINK libspdk_event_nbd.so 00:03:42.475 SYMLINK libspdk_event_scsi.so 00:03:42.475 LIB libspdk_event_nvmf.a 00:03:42.733 SO libspdk_event_nvmf.so.6.0 00:03:42.733 SYMLINK libspdk_event_nvmf.so 00:03:42.733 CC module/event/subsystems/iscsi/iscsi.o 00:03:42.733 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:42.991 LIB libspdk_event_iscsi.a 00:03:42.991 LIB libspdk_event_vhost_scsi.a 00:03:42.991 SO libspdk_event_iscsi.so.6.0 00:03:42.991 SO libspdk_event_vhost_scsi.so.3.0 00:03:42.991 SYMLINK libspdk_event_iscsi.so 00:03:42.991 SYMLINK libspdk_event_vhost_scsi.so 00:03:43.249 SO libspdk.so.6.0 00:03:43.249 SYMLINK libspdk.so 00:03:43.508 CC app/spdk_lspci/spdk_lspci.o 00:03:43.508 CC app/trace_record/trace_record.o 00:03:43.508 CXX app/trace/trace.o 00:03:43.508 CC app/spdk_nvme_perf/perf.o 00:03:43.508 CC app/spdk_tgt/spdk_tgt.o 00:03:43.508 CC app/iscsi_tgt/iscsi_tgt.o 00:03:43.508 CC app/nvmf_tgt/nvmf_main.o 00:03:43.508 CC examples/ioat/perf/perf.o 00:03:43.508 CC examples/util/zipf/zipf.o 00:03:43.508 CC test/thread/poller_perf/poller_perf.o 00:03:43.508 LINK spdk_lspci 00:03:43.766 LINK spdk_trace_record 00:03:43.766 LINK spdk_tgt 00:03:43.766 LINK zipf 00:03:43.766 LINK poller_perf 00:03:43.766 LINK ioat_perf 00:03:43.766 LINK iscsi_tgt 00:03:43.766 LINK nvmf_tgt 00:03:44.025 CC 
app/spdk_nvme_identify/identify.o 00:03:44.025 CC examples/ioat/verify/verify.o 00:03:44.025 LINK spdk_trace 00:03:44.025 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:44.283 LINK verify 00:03:44.283 CC test/dma/test_dma/test_dma.o 00:03:44.283 LINK spdk_nvme_perf 00:03:44.283 LINK interrupt_tgt 00:03:44.283 CC examples/thread/thread/thread_ex.o 00:03:44.283 CC test/app/bdev_svc/bdev_svc.o 00:03:44.283 CC app/spdk_nvme_discover/discovery_aer.o 00:03:44.541 CC app/spdk_top/spdk_top.o 00:03:44.541 LINK bdev_svc 00:03:44.541 CC examples/sock/hello_world/hello_sock.o 00:03:44.541 LINK spdk_nvme_discover 00:03:44.541 LINK thread 00:03:44.541 LINK test_dma 00:03:44.798 CC examples/vmd/lsvmd/lsvmd.o 00:03:44.798 LINK hello_sock 00:03:44.798 CC examples/idxd/perf/perf.o 00:03:44.798 TEST_HEADER include/spdk/accel.h 00:03:44.798 TEST_HEADER include/spdk/accel_module.h 00:03:45.056 TEST_HEADER include/spdk/assert.h 00:03:45.056 TEST_HEADER include/spdk/barrier.h 00:03:45.056 TEST_HEADER include/spdk/base64.h 00:03:45.056 TEST_HEADER include/spdk/bdev.h 00:03:45.056 TEST_HEADER include/spdk/bdev_module.h 00:03:45.056 TEST_HEADER include/spdk/bdev_zone.h 00:03:45.056 TEST_HEADER include/spdk/bit_array.h 00:03:45.056 LINK lsvmd 00:03:45.056 TEST_HEADER include/spdk/bit_pool.h 00:03:45.056 TEST_HEADER include/spdk/blob_bdev.h 00:03:45.056 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:45.056 TEST_HEADER include/spdk/blobfs.h 00:03:45.056 TEST_HEADER include/spdk/blob.h 00:03:45.056 TEST_HEADER include/spdk/conf.h 00:03:45.056 TEST_HEADER include/spdk/config.h 00:03:45.056 TEST_HEADER include/spdk/cpuset.h 00:03:45.056 TEST_HEADER include/spdk/crc16.h 00:03:45.056 TEST_HEADER include/spdk/crc32.h 00:03:45.056 TEST_HEADER include/spdk/crc64.h 00:03:45.056 TEST_HEADER include/spdk/dif.h 00:03:45.056 TEST_HEADER include/spdk/dma.h 00:03:45.056 TEST_HEADER include/spdk/endian.h 00:03:45.056 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:45.056 TEST_HEADER include/spdk/env_dpdk.h 00:03:45.056 TEST_HEADER include/spdk/env.h 00:03:45.056 TEST_HEADER include/spdk/event.h 00:03:45.056 TEST_HEADER include/spdk/fd_group.h 00:03:45.056 TEST_HEADER include/spdk/fd.h 00:03:45.056 CC examples/vmd/led/led.o 00:03:45.056 TEST_HEADER include/spdk/file.h 00:03:45.056 TEST_HEADER include/spdk/ftl.h 00:03:45.056 TEST_HEADER include/spdk/gpt_spec.h 00:03:45.056 TEST_HEADER include/spdk/hexlify.h 00:03:45.056 TEST_HEADER include/spdk/histogram_data.h 00:03:45.056 TEST_HEADER include/spdk/idxd.h 00:03:45.056 TEST_HEADER include/spdk/idxd_spec.h 00:03:45.056 TEST_HEADER include/spdk/init.h 00:03:45.056 TEST_HEADER include/spdk/ioat.h 00:03:45.056 TEST_HEADER include/spdk/ioat_spec.h 00:03:45.056 TEST_HEADER include/spdk/iscsi_spec.h 00:03:45.056 TEST_HEADER include/spdk/json.h 00:03:45.056 TEST_HEADER include/spdk/jsonrpc.h 00:03:45.056 TEST_HEADER include/spdk/keyring.h 00:03:45.056 TEST_HEADER include/spdk/keyring_module.h 00:03:45.056 TEST_HEADER include/spdk/likely.h 00:03:45.056 TEST_HEADER include/spdk/log.h 00:03:45.056 TEST_HEADER include/spdk/lvol.h 00:03:45.056 LINK spdk_nvme_identify 00:03:45.056 TEST_HEADER include/spdk/memory.h 00:03:45.056 TEST_HEADER include/spdk/mmio.h 00:03:45.056 TEST_HEADER include/spdk/nbd.h 00:03:45.056 TEST_HEADER include/spdk/net.h 00:03:45.056 TEST_HEADER include/spdk/notify.h 00:03:45.056 TEST_HEADER include/spdk/nvme.h 00:03:45.056 TEST_HEADER include/spdk/nvme_intel.h 00:03:45.056 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:45.056 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:45.056 
TEST_HEADER include/spdk/nvme_spec.h 00:03:45.056 TEST_HEADER include/spdk/nvme_zns.h 00:03:45.056 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:45.056 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:45.056 TEST_HEADER include/spdk/nvmf.h 00:03:45.056 TEST_HEADER include/spdk/nvmf_spec.h 00:03:45.056 TEST_HEADER include/spdk/nvmf_transport.h 00:03:45.056 TEST_HEADER include/spdk/opal.h 00:03:45.056 TEST_HEADER include/spdk/opal_spec.h 00:03:45.056 TEST_HEADER include/spdk/pci_ids.h 00:03:45.056 TEST_HEADER include/spdk/pipe.h 00:03:45.056 TEST_HEADER include/spdk/queue.h 00:03:45.056 TEST_HEADER include/spdk/reduce.h 00:03:45.056 TEST_HEADER include/spdk/rpc.h 00:03:45.056 TEST_HEADER include/spdk/scheduler.h 00:03:45.056 TEST_HEADER include/spdk/scsi.h 00:03:45.056 CC examples/accel/perf/accel_perf.o 00:03:45.056 TEST_HEADER include/spdk/scsi_spec.h 00:03:45.056 TEST_HEADER include/spdk/sock.h 00:03:45.056 TEST_HEADER include/spdk/stdinc.h 00:03:45.056 TEST_HEADER include/spdk/string.h 00:03:45.056 TEST_HEADER include/spdk/thread.h 00:03:45.057 TEST_HEADER include/spdk/trace.h 00:03:45.057 TEST_HEADER include/spdk/trace_parser.h 00:03:45.057 TEST_HEADER include/spdk/tree.h 00:03:45.057 TEST_HEADER include/spdk/ublk.h 00:03:45.057 TEST_HEADER include/spdk/util.h 00:03:45.057 TEST_HEADER include/spdk/uuid.h 00:03:45.057 TEST_HEADER include/spdk/version.h 00:03:45.057 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:45.057 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:45.057 LINK led 00:03:45.057 TEST_HEADER include/spdk/vhost.h 00:03:45.057 TEST_HEADER include/spdk/vmd.h 00:03:45.057 TEST_HEADER include/spdk/xor.h 00:03:45.057 TEST_HEADER include/spdk/zipf.h 00:03:45.057 CXX test/cpp_headers/accel.o 00:03:45.315 LINK idxd_perf 00:03:45.315 CC examples/blob/hello_world/hello_blob.o 00:03:45.315 CXX test/cpp_headers/accel_module.o 00:03:45.315 LINK nvme_fuzz 00:03:45.585 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:45.585 CC examples/nvme/hello_world/hello_world.o 00:03:45.585 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:45.585 CXX test/cpp_headers/assert.o 00:03:45.585 LINK spdk_top 00:03:45.585 CC examples/nvme/reconnect/reconnect.o 00:03:45.585 LINK accel_perf 00:03:45.585 LINK hello_blob 00:03:45.856 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:46.112 CXX test/cpp_headers/barrier.o 00:03:46.112 LINK hello_world 00:03:46.112 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:46.112 CC examples/blob/cli/blobcli.o 00:03:46.112 CXX test/cpp_headers/base64.o 00:03:46.112 LINK reconnect 00:03:46.368 CC app/vhost/vhost.o 00:03:46.368 CC examples/nvme/arbitration/arbitration.o 00:03:46.368 LINK vhost_fuzz 00:03:46.368 CC examples/bdev/hello_world/hello_bdev.o 00:03:46.625 CXX test/cpp_headers/bdev.o 00:03:46.625 LINK vhost 00:03:46.625 CC examples/bdev/bdevperf/bdevperf.o 00:03:46.625 CXX test/cpp_headers/bdev_module.o 00:03:46.625 LINK hello_bdev 00:03:46.883 LINK arbitration 00:03:46.883 LINK nvme_manage 00:03:46.883 LINK blobcli 00:03:46.883 CXX test/cpp_headers/bdev_zone.o 00:03:46.883 CC test/env/mem_callbacks/mem_callbacks.o 00:03:47.140 CC app/spdk_dd/spdk_dd.o 00:03:47.140 CXX test/cpp_headers/bit_array.o 00:03:47.396 CC examples/nvme/hotplug/hotplug.o 00:03:47.397 CC test/env/vtophys/vtophys.o 00:03:47.397 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:47.397 CC test/event/event_perf/event_perf.o 00:03:47.698 CXX test/cpp_headers/bit_pool.o 00:03:47.698 LINK iscsi_fuzz 00:03:47.698 LINK vtophys 00:03:47.698 LINK event_perf 00:03:47.698 LINK bdevperf 00:03:47.955 LINK hotplug 
00:03:47.955 LINK env_dpdk_post_init 00:03:47.955 LINK spdk_dd 00:03:47.955 LINK mem_callbacks 00:03:47.955 CXX test/cpp_headers/blob_bdev.o 00:03:47.955 CXX test/cpp_headers/blobfs_bdev.o 00:03:48.212 CC test/app/histogram_perf/histogram_perf.o 00:03:48.212 CXX test/cpp_headers/blobfs.o 00:03:48.212 CC test/event/reactor/reactor.o 00:03:48.212 CC examples/nvme/abort/abort.o 00:03:48.212 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:48.212 LINK histogram_perf 00:03:48.212 CC test/env/memory/memory_ut.o 00:03:48.212 LINK reactor 00:03:48.212 CC test/env/pci/pci_ut.o 00:03:48.469 CXX test/cpp_headers/blob.o 00:03:48.469 LINK cmb_copy 00:03:48.469 CC test/event/reactor_perf/reactor_perf.o 00:03:48.469 CC test/app/jsoncat/jsoncat.o 00:03:48.469 CC app/fio/nvme/fio_plugin.o 00:03:48.726 CXX test/cpp_headers/conf.o 00:03:48.726 LINK abort 00:03:48.726 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:48.726 LINK reactor_perf 00:03:48.983 LINK pci_ut 00:03:48.983 LINK jsoncat 00:03:48.983 CXX test/cpp_headers/config.o 00:03:48.983 CXX test/cpp_headers/cpuset.o 00:03:48.983 CC test/nvme/aer/aer.o 00:03:49.240 LINK pmr_persistence 00:03:49.240 CXX test/cpp_headers/crc16.o 00:03:49.240 CC test/event/app_repeat/app_repeat.o 00:03:49.498 CC test/rpc_client/rpc_client_test.o 00:03:49.498 CC test/app/stub/stub.o 00:03:49.498 LINK aer 00:03:49.498 CXX test/cpp_headers/crc32.o 00:03:49.498 LINK app_repeat 00:03:49.757 LINK stub 00:03:49.757 CC test/nvme/reset/reset.o 00:03:49.757 LINK memory_ut 00:03:49.757 CXX test/cpp_headers/crc64.o 00:03:49.757 LINK rpc_client_test 00:03:49.757 CC app/fio/bdev/fio_plugin.o 00:03:50.015 CXX test/cpp_headers/dif.o 00:03:50.015 LINK spdk_nvme 00:03:50.015 CXX test/cpp_headers/dma.o 00:03:50.015 CC test/event/scheduler/scheduler.o 00:03:50.272 LINK reset 00:03:50.272 CC test/accel/dif/dif.o 00:03:50.272 CXX test/cpp_headers/endian.o 00:03:50.530 CC test/nvme/sgl/sgl.o 00:03:50.530 LINK scheduler 00:03:50.530 CC examples/nvmf/nvmf/nvmf.o 00:03:50.530 LINK spdk_bdev 00:03:50.530 CXX test/cpp_headers/env_dpdk.o 00:03:50.530 CC test/blobfs/mkfs/mkfs.o 00:03:50.787 CXX test/cpp_headers/env.o 00:03:50.787 CC test/lvol/esnap/esnap.o 00:03:50.787 CC test/nvme/e2edp/nvme_dp.o 00:03:50.787 CXX test/cpp_headers/event.o 00:03:50.787 CC test/nvme/overhead/overhead.o 00:03:50.787 LINK sgl 00:03:50.787 LINK dif 00:03:50.787 LINK mkfs 00:03:51.044 CXX test/cpp_headers/fd_group.o 00:03:51.044 CXX test/cpp_headers/fd.o 00:03:51.044 CXX test/cpp_headers/file.o 00:03:51.044 LINK nvmf 00:03:51.044 LINK overhead 00:03:51.044 LINK nvme_dp 00:03:51.301 CXX test/cpp_headers/ftl.o 00:03:51.301 CC test/nvme/err_injection/err_injection.o 00:03:51.301 CC test/nvme/startup/startup.o 00:03:51.301 CC test/nvme/reserve/reserve.o 00:03:51.301 CC test/nvme/simple_copy/simple_copy.o 00:03:51.301 CXX test/cpp_headers/gpt_spec.o 00:03:51.301 CC test/nvme/connect_stress/connect_stress.o 00:03:51.559 CC test/nvme/boot_partition/boot_partition.o 00:03:51.559 LINK err_injection 00:03:51.559 LINK startup 00:03:51.559 LINK reserve 00:03:51.559 LINK simple_copy 00:03:51.559 CC test/nvme/compliance/nvme_compliance.o 00:03:51.559 LINK boot_partition 00:03:51.559 CXX test/cpp_headers/hexlify.o 00:03:51.559 LINK connect_stress 00:03:51.817 CXX test/cpp_headers/histogram_data.o 00:03:51.817 CC test/nvme/fused_ordering/fused_ordering.o 00:03:51.817 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:51.817 CC test/nvme/fdp/fdp.o 00:03:51.817 CC test/nvme/cuse/cuse.o 00:03:51.817 CXX test/cpp_headers/idxd.o 00:03:52.074 
CXX test/cpp_headers/idxd_spec.o 00:03:52.074 LINK nvme_compliance 00:03:52.074 CXX test/cpp_headers/init.o 00:03:52.074 CC test/bdev/bdevio/bdevio.o 00:03:52.074 LINK fused_ordering 00:03:52.333 LINK doorbell_aers 00:03:52.333 LINK fdp 00:03:52.333 CXX test/cpp_headers/ioat.o 00:03:52.333 CXX test/cpp_headers/ioat_spec.o 00:03:52.591 CXX test/cpp_headers/iscsi_spec.o 00:03:52.591 CXX test/cpp_headers/json.o 00:03:52.591 CXX test/cpp_headers/jsonrpc.o 00:03:52.591 CXX test/cpp_headers/keyring.o 00:03:52.591 CXX test/cpp_headers/keyring_module.o 00:03:52.591 CXX test/cpp_headers/likely.o 00:03:52.848 CXX test/cpp_headers/log.o 00:03:52.848 CXX test/cpp_headers/lvol.o 00:03:52.848 CXX test/cpp_headers/memory.o 00:03:52.848 LINK bdevio 00:03:52.848 CXX test/cpp_headers/mmio.o 00:03:52.848 CXX test/cpp_headers/nbd.o 00:03:52.848 CXX test/cpp_headers/net.o 00:03:52.848 CXX test/cpp_headers/notify.o 00:03:52.848 CXX test/cpp_headers/nvme.o 00:03:52.848 CXX test/cpp_headers/nvme_intel.o 00:03:53.107 CXX test/cpp_headers/nvme_ocssd.o 00:03:53.107 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:53.107 CXX test/cpp_headers/nvme_spec.o 00:03:53.107 CXX test/cpp_headers/nvme_zns.o 00:03:53.107 CXX test/cpp_headers/nvmf_cmd.o 00:03:53.107 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:53.107 CXX test/cpp_headers/nvmf.o 00:03:53.107 CXX test/cpp_headers/nvmf_spec.o 00:03:53.366 CXX test/cpp_headers/nvmf_transport.o 00:03:53.366 CXX test/cpp_headers/opal.o 00:03:53.366 CXX test/cpp_headers/opal_spec.o 00:03:53.366 CXX test/cpp_headers/pci_ids.o 00:03:53.366 CXX test/cpp_headers/pipe.o 00:03:53.366 CXX test/cpp_headers/queue.o 00:03:53.366 LINK cuse 00:03:53.366 CXX test/cpp_headers/reduce.o 00:03:53.366 CXX test/cpp_headers/rpc.o 00:03:53.366 CXX test/cpp_headers/scheduler.o 00:03:53.366 CXX test/cpp_headers/scsi.o 00:03:53.366 CXX test/cpp_headers/scsi_spec.o 00:03:53.366 CXX test/cpp_headers/sock.o 00:03:53.625 CXX test/cpp_headers/stdinc.o 00:03:53.625 CXX test/cpp_headers/string.o 00:03:53.625 CXX test/cpp_headers/thread.o 00:03:53.625 CXX test/cpp_headers/trace.o 00:03:53.625 CXX test/cpp_headers/trace_parser.o 00:03:53.625 CXX test/cpp_headers/tree.o 00:03:53.625 CXX test/cpp_headers/ublk.o 00:03:53.625 CXX test/cpp_headers/util.o 00:03:53.625 CXX test/cpp_headers/uuid.o 00:03:53.625 CXX test/cpp_headers/version.o 00:03:53.883 CXX test/cpp_headers/vfio_user_pci.o 00:03:53.883 CXX test/cpp_headers/vfio_user_spec.o 00:03:53.883 CXX test/cpp_headers/vhost.o 00:03:53.883 CXX test/cpp_headers/vmd.o 00:03:53.883 CXX test/cpp_headers/xor.o 00:03:53.883 CXX test/cpp_headers/zipf.o 00:03:56.411 LINK esnap 00:03:56.668 ************************************ 00:03:56.668 END TEST make 00:03:56.668 ************************************ 00:03:56.668 00:03:56.668 real 1m27.861s 00:03:56.668 user 9m46.632s 00:03:56.668 sys 1m58.306s 00:03:56.668 21:58:43 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:56.668 21:58:43 make -- common/autotest_common.sh@10 -- $ set +x 00:03:56.668 21:58:43 -- common/autotest_common.sh@1142 -- $ return 0 00:03:56.668 21:58:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:56.668 21:58:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:56.668 21:58:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:56.668 21:58:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:56.668 21:58:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:56.668 21:58:43 -- pm/common@44 -- $ pid=5211 
00:03:56.668 21:58:43 -- pm/common@50 -- $ kill -TERM 5211 00:03:56.668 21:58:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:56.668 21:58:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:56.668 21:58:43 -- pm/common@44 -- $ pid=5213 00:03:56.668 21:58:43 -- pm/common@50 -- $ kill -TERM 5213 00:03:56.926 21:58:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:56.926 21:58:43 -- nvmf/common.sh@7 -- # uname -s 00:03:56.926 21:58:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:56.926 21:58:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:56.926 21:58:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:56.926 21:58:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:56.926 21:58:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:56.926 21:58:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:56.926 21:58:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:56.926 21:58:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:56.926 21:58:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:56.926 21:58:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:56.926 21:58:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:03:56.926 21:58:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:03:56.926 21:58:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:56.926 21:58:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:56.926 21:58:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:56.926 21:58:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:56.926 21:58:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:56.926 21:58:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:56.926 21:58:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:56.926 21:58:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:56.927 21:58:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.927 21:58:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.927 21:58:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.927 21:58:43 -- paths/export.sh@5 -- # export PATH 00:03:56.927 21:58:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.927 21:58:43 -- nvmf/common.sh@47 -- # : 0 
00:03:56.927 21:58:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:56.927 21:58:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:56.927 21:58:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:56.927 21:58:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:56.927 21:58:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:56.927 21:58:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:56.927 21:58:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:56.927 21:58:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:56.927 21:58:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:56.927 21:58:43 -- spdk/autotest.sh@32 -- # uname -s 00:03:56.927 21:58:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:56.927 21:58:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:56.927 21:58:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:56.927 21:58:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:56.927 21:58:43 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:56.927 21:58:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:56.927 21:58:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:56.927 21:58:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:56.927 21:58:43 -- spdk/autotest.sh@48 -- # udevadm_pid=54786 00:03:56.927 21:58:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:56.927 21:58:43 -- pm/common@17 -- # local monitor 00:03:56.927 21:58:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:56.927 21:58:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:56.927 21:58:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:56.927 21:58:43 -- pm/common@25 -- # sleep 1 00:03:56.927 21:58:43 -- pm/common@21 -- # date +%s 00:03:56.927 21:58:43 -- pm/common@21 -- # date +%s 00:03:56.927 21:58:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721080723 00:03:56.927 21:58:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721080723 00:03:56.927 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721080723_collect-vmstat.pm.log 00:03:56.927 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721080723_collect-cpu-load.pm.log 00:03:57.859 21:58:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:57.859 21:58:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:57.859 21:58:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:57.859 21:58:44 -- common/autotest_common.sh@10 -- # set +x 00:03:57.859 21:58:44 -- spdk/autotest.sh@59 -- # create_test_list 00:03:57.859 21:58:44 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:57.859 21:58:44 -- common/autotest_common.sh@10 -- # set +x 00:03:57.859 21:58:44 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:57.859 21:58:44 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:57.859 21:58:44 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:57.859 21:58:44 -- spdk/autotest.sh@62 -- # 
out=/home/vagrant/spdk_repo/spdk/../output 00:03:57.859 21:58:44 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:57.859 21:58:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:57.859 21:58:44 -- common/autotest_common.sh@1455 -- # uname 00:03:57.859 21:58:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:57.859 21:58:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:57.859 21:58:44 -- common/autotest_common.sh@1475 -- # uname 00:03:57.859 21:58:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:57.859 21:58:44 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:57.859 21:58:44 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:57.859 21:58:44 -- spdk/autotest.sh@72 -- # hash lcov 00:03:57.859 21:58:44 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:57.859 21:58:44 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:57.859 --rc lcov_branch_coverage=1 00:03:57.859 --rc lcov_function_coverage=1 00:03:57.859 --rc genhtml_branch_coverage=1 00:03:57.859 --rc genhtml_function_coverage=1 00:03:57.859 --rc genhtml_legend=1 00:03:57.859 --rc geninfo_all_blocks=1 00:03:57.859 ' 00:03:57.859 21:58:44 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:57.859 --rc lcov_branch_coverage=1 00:03:57.859 --rc lcov_function_coverage=1 00:03:57.859 --rc genhtml_branch_coverage=1 00:03:57.859 --rc genhtml_function_coverage=1 00:03:57.859 --rc genhtml_legend=1 00:03:57.859 --rc geninfo_all_blocks=1 00:03:57.859 ' 00:03:57.859 21:58:44 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:57.859 --rc lcov_branch_coverage=1 00:03:57.859 --rc lcov_function_coverage=1 00:03:57.859 --rc genhtml_branch_coverage=1 00:03:57.859 --rc genhtml_function_coverage=1 00:03:57.859 --rc genhtml_legend=1 00:03:57.859 --rc geninfo_all_blocks=1 00:03:57.859 --no-external' 00:03:57.859 21:58:44 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:57.859 --rc lcov_branch_coverage=1 00:03:57.859 --rc lcov_function_coverage=1 00:03:57.859 --rc genhtml_branch_coverage=1 00:03:57.859 --rc genhtml_function_coverage=1 00:03:57.859 --rc genhtml_legend=1 00:03:57.859 --rc geninfo_all_blocks=1 00:03:57.859 --no-external' 00:03:57.859 21:58:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:58.117 lcov: LCOV version 1.14 00:03:58.117 21:58:44 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:16.192 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:16.193 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no 
functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 
00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:28.393 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:28.393 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:28.394 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions 
found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:28.394 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:28.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:31.678 21:59:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:31.678 21:59:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.678 21:59:18 -- common/autotest_common.sh@10 -- # set +x 00:04:31.678 21:59:18 -- spdk/autotest.sh@91 -- # rm -f 00:04:31.678 21:59:18 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:31.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.935 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:32.194 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:32.194 21:59:18 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:32.194 21:59:18 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:32.194 21:59:18 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:32.194 21:59:18 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:32.194 21:59:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:32.194 21:59:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:32.194 21:59:18 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:32.194 21:59:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:32.194 21:59:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:32.194 21:59:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:32.194 21:59:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:32.194 21:59:18 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:32.194 21:59:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:32.194 21:59:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:32.194 21:59:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:32.194 21:59:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:32.194 21:59:18 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:32.194 21:59:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:32.194 21:59:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:32.194 21:59:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:32.194 21:59:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:32.194 21:59:18 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:32.194 21:59:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:32.194 21:59:18 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:32.194 21:59:18 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:32.194 21:59:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:32.194 21:59:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:32.194 21:59:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:32.194 21:59:18 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:32.194 21:59:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:32.194 No valid GPT data, bailing 00:04:32.194 21:59:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:32.194 21:59:18 -- scripts/common.sh@391 -- # pt= 00:04:32.194 21:59:18 -- scripts/common.sh@392 -- # return 1 00:04:32.194 21:59:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:32.194 1+0 records in 00:04:32.194 1+0 records out 00:04:32.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00396802 s, 264 MB/s 00:04:32.195 21:59:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:32.195 21:59:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:32.195 21:59:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:32.195 21:59:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:32.195 21:59:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:32.195 No valid GPT data, bailing 00:04:32.195 21:59:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:32.195 21:59:19 -- scripts/common.sh@391 -- # pt= 00:04:32.195 21:59:19 -- scripts/common.sh@392 -- # return 1 00:04:32.195 21:59:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:32.195 1+0 records in 00:04:32.195 1+0 records out 00:04:32.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0038441 s, 273 MB/s 00:04:32.195 21:59:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:32.195 21:59:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:32.195 21:59:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:32.195 21:59:19 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:32.195 21:59:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:32.195 No valid GPT data, bailing 00:04:32.453 21:59:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:32.453 21:59:19 -- scripts/common.sh@391 -- # pt= 00:04:32.453 21:59:19 -- scripts/common.sh@392 -- # return 1 00:04:32.453 21:59:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:32.453 1+0 records in 00:04:32.453 1+0 records out 00:04:32.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00425622 s, 246 MB/s 00:04:32.453 21:59:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:32.453 21:59:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:32.453 21:59:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:32.453 21:59:19 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:32.453 21:59:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:32.453 No valid GPT data, bailing 00:04:32.453 21:59:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:32.453 21:59:19 -- scripts/common.sh@391 -- # pt= 00:04:32.453 21:59:19 -- scripts/common.sh@392 -- # return 1 00:04:32.453 21:59:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
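Each namespace above passes through the same gate before the tests proper: spdk-gpt.py and blkid find no partition table ("No valid GPT data, bailing"), block_in_use therefore returns 1, and the device is overwritten with a single 1 MiB block of zeros; the dd transfer summary for /dev/nvme1n3 continues just below this note. A condensed sketch of that per-device loop, with the glob simplified for illustration (the traced loop uses the extglob pattern /dev/nvme*n!(*p*)):

    # sketch of the check-then-wipe loop around block_in_use
    for dev in /dev/nvme*n*; do
        [[ $dev == *p* ]] && continue            # skip partitions, keep whole namespaces
        if ! block_in_use "$dev"; then           # returned 1 above: no GPT data found
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done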
00:04:32.453 1+0 records in 00:04:32.453 1+0 records out 00:04:32.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00349918 s, 300 MB/s 00:04:32.453 21:59:19 -- spdk/autotest.sh@118 -- # sync 00:04:32.453 21:59:19 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:32.453 21:59:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:32.453 21:59:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:34.358 21:59:20 -- spdk/autotest.sh@124 -- # uname -s 00:04:34.358 21:59:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:34.358 21:59:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:34.358 21:59:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.358 21:59:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.358 21:59:20 -- common/autotest_common.sh@10 -- # set +x 00:04:34.358 ************************************ 00:04:34.358 START TEST setup.sh 00:04:34.358 ************************************ 00:04:34.358 21:59:20 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:34.358 * Looking for test storage... 00:04:34.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:34.358 21:59:21 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:34.358 21:59:21 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:34.358 21:59:21 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:34.358 21:59:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.358 21:59:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.358 21:59:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.358 ************************************ 00:04:34.358 START TEST acl 00:04:34.358 ************************************ 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:34.358 * Looking for test storage... 
00:04:34.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:34.358 21:59:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:34.358 21:59:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.358 21:59:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:34.358 21:59:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:34.358 21:59:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:34.358 21:59:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:34.358 21:59:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:34.358 21:59:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.358 21:59:21 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:34.925 21:59:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:34.925 21:59:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:34.925 21:59:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:34.925 21:59:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:34.925 21:59:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.925 21:59:21 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:35.493 21:59:22 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:35.493 21:59:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:35.493 21:59:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.493 Hugepages 00:04:35.493 node hugesize free / total 00:04:35.493 21:59:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:35.493 21:59:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:35.493 21:59:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.493 00:04:35.493 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:35.493 21:59:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:35.493 21:59:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:35.493 21:59:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:35.751 21:59:22 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:35.751 21:59:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.751 21:59:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.751 21:59:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:35.751 ************************************ 00:04:35.751 START TEST denied 00:04:35.751 ************************************ 00:04:35.751 21:59:22 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:35.751 21:59:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:35.751 21:59:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:35.751 21:59:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.751 21:59:22 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:35.751 21:59:22 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.687 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:36.687 21:59:23 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:36.687 21:59:23 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:36.687 21:59:23 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:36.687 21:59:23 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:36.687 21:59:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:36.687 21:59:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:36.687 21:59:23 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:36.687 21:59:23 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:36.687 21:59:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.687 21:59:23 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.251 00:04:37.251 real 0m1.314s 00:04:37.251 user 0m0.532s 00:04:37.251 sys 0m0.738s 00:04:37.251 ************************************ 00:04:37.251 END TEST denied 00:04:37.251 ************************************ 00:04:37.251 21:59:23 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.251 21:59:23 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:37.251 21:59:23 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:37.251 21:59:23 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:37.251 21:59:23 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.251 21:59:23 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.251 21:59:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:37.251 ************************************ 00:04:37.251 START TEST allowed 00:04:37.251 ************************************ 00:04:37.251 21:59:23 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:37.251 21:59:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:37.251 21:59:23 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:37.251 21:59:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.251 21:59:23 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:37.251 21:59:23 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:37.820 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.820 21:59:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:37.820 21:59:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:37.820 21:59:24 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:37.820 21:59:24 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:37.820 21:59:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:37.820 21:59:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:37.820 21:59:24 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:37.820 21:59:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:37.820 21:59:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.820 21:59:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.751 00:04:38.752 real 0m1.384s 00:04:38.752 user 0m0.622s 00:04:38.752 sys 0m0.767s 00:04:38.752 21:59:25 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:38.752 21:59:25 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:38.752 ************************************ 00:04:38.752 END TEST allowed 00:04:38.752 ************************************ 00:04:38.752 21:59:25 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:38.752 ************************************ 00:04:38.752 END TEST acl 00:04:38.752 ************************************ 00:04:38.752 00:04:38.752 real 0m4.321s 00:04:38.752 user 0m1.918s 00:04:38.752 sys 0m2.382s 00:04:38.752 21:59:25 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.752 21:59:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:38.752 21:59:25 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:38.752 21:59:25 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:38.752 21:59:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.752 21:59:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.752 21:59:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.752 ************************************ 00:04:38.752 START TEST hugepages 00:04:38.752 ************************************ 00:04:38.752 21:59:25 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:38.752 * Looking for test storage... 00:04:38.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5893368 kB' 'MemAvailable: 7405272 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 477092 kB' 'Inactive: 1352980 kB' 'Active(anon): 114892 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 106084 kB' 'Mapped: 48584 kB' 'Shmem: 10488 kB' 'KReclaimable: 67324 kB' 'Slab: 141176 kB' 'SReclaimable: 67324 kB' 'SUnreclaim: 73852 kB' 'KernelStack: 6640 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 336768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.752 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:38.753 21:59:25 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:38.753 21:59:25 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:38.753 21:59:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.753 21:59:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.753 21:59:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:38.753 ************************************ 00:04:38.753 START TEST default_setup 00:04:38.753 ************************************ 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.753 21:59:25 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:39.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.320 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.320 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8008536 kB' 'MemAvailable: 9520248 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 494552 kB' 'Inactive: 1352992 kB' 'Active(anon): 132352 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123204 kB' 'Mapped: 48668 kB' 'Shmem: 10468 kB' 'KReclaimable: 66920 kB' 'Slab: 140704 kB' 'SReclaimable: 66920 kB' 'SUnreclaim: 73784 kB' 'KernelStack: 6512 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.582 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
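The trace at this point is setup/common.sh's get_meminfo helper stepping through /proc/meminfo with IFS=': ' and read -r var val _, executing "continue" for every key that is not the one requested (here AnonHugePages) and echoing the value of the first matching key. A minimal standalone sketch of that lookup pattern follows; the function name meminfo_lookup and its standalone form are illustrative, not the repository's exact helper.

meminfo_lookup() {
  local get=$1 var val _
  # Scan /proc/meminfo line by line; keys and values are separated by ': '
  while IFS=': ' read -r var val _; do
    # Skip every key that is not the one we were asked for
    [[ $var == "$get" ]] || continue
    # Print the value of the first match (e.g. a kB count) and stop
    echo "$val"
    return 0
  done < /proc/meminfo
  return 1
}

# Example (assumed usage): meminfo_lookup AnonHugePages  ->  0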
00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.583 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8008036 kB' 'MemAvailable: 9519736 kB' 'Buffers: 2436 kB' 'Cached: 1723228 kB' 'SwapCached: 0 kB' 'Active: 494112 kB' 'Inactive: 1352996 kB' 'Active(anon): 131912 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123064 kB' 'Mapped: 48548 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140660 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73776 kB' 'KernelStack: 6496 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
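Earlier in this trace (the clear_hp calls before run_test default_setup) the script walks every NUMA node's hugepage directories and writes 0 into them so the test starts from an empty pool. The xtrace output shows the loop and the "echo 0" but not the redirection target, so the sketch below assumes the standard sysfs nr_hugepages files and uses an illustrative helper name, reset_node_hugepages; it is a rough sketch of the pattern, not the repository's clear_hp function.

reset_node_hugepages() {
  local node hp
  # One directory per page size under each node, e.g. hugepages-2048kB
  for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
      # Drop this node's pool for this page size (requires root)
      echo 0 > "$hp/nr_hugepages"
    done
  done
}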
00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.584 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8008036 kB' 'MemAvailable: 9519736 kB' 'Buffers: 2436 kB' 'Cached: 1723228 kB' 'SwapCached: 0 kB' 'Active: 494076 kB' 'Inactive: 1352996 kB' 'Active(anon): 131876 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122984 kB' 'Mapped: 
48548 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140660 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73776 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.585 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.586 
21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.586 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:39.587 nr_hugepages=1024 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:39.587 resv_hugepages=0 00:04:39.587 surplus_hugepages=0 00:04:39.587 anon_hugepages=0 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.587 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8008036 kB' 'MemAvailable: 9519736 kB' 'Buffers: 2436 kB' 'Cached: 1723228 kB' 'SwapCached: 0 kB' 'Active: 494072 kB' 'Inactive: 1352996 kB' 'Active(anon): 131872 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122972 kB' 'Mapped: 48548 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140660 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73776 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.588 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 
21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.589 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8008036 kB' 'MemUsed: 4233940 kB' 'SwapCached: 0 kB' 'Active: 494124 kB' 'Inactive: 1352996 kB' 'Active(anon): 131924 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1352996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1725664 kB' 'Mapped: 48548 kB' 'AnonPages: 123072 kB' 'Shmem: 10464 kB' 'KernelStack: 6496 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66884 kB' 'Slab: 140672 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.590 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.591 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
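The xtrace above is the tail of a single get_meminfo call from setup/common.sh: each remaining /proc/meminfo key (Mlocked, Dirty, Writeback, ..., HugePages_Total, HugePages_Free) is skipped with "continue" until the requested key, HugePages_Surp, matches, at which point the helper echoes 0 and returns. A minimal bash sketch of that helper follows, reconstructed from the trace; get_meminfo, mem_f, mapfile -t mem, the "Node +([0-9]) " prefix strip and the IFS=': ' read loop all appear in the trace, while the argument handling and the per-node fallback path are assumptions here, not the verbatim SPDK source.

shopt -s extglob   # needed for the "Node +([0-9]) " pattern strip seen in the trace

get_meminfo() {    # sketch only -- reconstructed from the xtrace, not the SPDK source
    local get=$1            # meminfo key to look up, e.g. HugePages_Surp
    local node=${2:-}       # optional NUMA node id (assumed calling convention)
    local var val _
    local mem_f=/proc/meminfo
    # With an empty $node this test becomes the ".../node/node/meminfo" check
    # visible in the trace, so the helper falls back to /proc/meminfo.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem=()
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long runs of "continue" in the trace
        echo "$val"                        # e.g. 0 for HugePages_Surp, 512 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Usage mirroring the calls in this log (values taken from the meminfo dumps):
#   get_meminfo HugePages_Surp    -> 0
#   get_meminfo HugePages_Total   -> 512 once the per-node reservation is in place

The same helper is what per_node_1G_alloc invokes below for AnonHugePages, HugePages_Surp and HugePages_Rsvd; the 512-page figure in those dumps is simply the requested 1048576 kB per node divided by the 2048 kB Hugepagesize they report (1048576 / 2048 = 512).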
00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.592 node0=1024 expecting 1024 00:04:39.592 ************************************ 00:04:39.592 END TEST default_setup 00:04:39.592 ************************************ 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:39.592 00:04:39.592 real 0m0.911s 00:04:39.592 user 0m0.437s 00:04:39.592 sys 0m0.414s 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.592 21:59:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:39.592 21:59:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:39.592 21:59:26 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:39.592 21:59:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.592 21:59:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.592 21:59:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:39.592 ************************************ 00:04:39.592 START TEST per_node_1G_alloc 00:04:39.592 ************************************ 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.592 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:39.592 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:39.593 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:39.593 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:39.593 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.593 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:39.902 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.902 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:39.902 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.902 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9048808 kB' 'MemAvailable: 10560512 kB' 'Buffers: 2436 kB' 'Cached: 1723228 kB' 'SwapCached: 0 kB' 'Active: 494168 kB' 'Inactive: 1353000 kB' 'Active(anon): 131968 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123352 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140664 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73780 kB' 'KernelStack: 6500 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.903 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:39.903 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.164 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.165 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.165 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9048808 kB' 'MemAvailable: 10560512 kB' 'Buffers: 2436 kB' 'Cached: 1723228 kB' 'SwapCached: 0 kB' 'Active: 493856 kB' 'Inactive: 1353000 kB' 'Active(anon): 131656 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123016 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140664 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73780 kB' 'KernelStack: 6468 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.166 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.167 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9048808 kB' 'MemAvailable: 10560512 kB' 'Buffers: 2436 kB' 'Cached: 1723228 kB' 'SwapCached: 0 kB' 'Active: 493876 kB' 'Inactive: 1353000 kB' 'Active(anon): 131676 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123056 kB' 'Mapped: 48540 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140664 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73780 kB' 'KernelStack: 6496 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.168 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.169 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 
21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:40.170 nr_hugepages=512 00:04:40.170 resv_hugepages=0 00:04:40.170 surplus_hugepages=0 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:40.170 anon_hugepages=0 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9048808 kB' 'MemAvailable: 10560512 kB' 'Buffers: 2436 kB' 'Cached: 1723228 kB' 'SwapCached: 0 kB' 'Active: 493908 kB' 'Inactive: 1353000 kB' 'Active(anon): 131708 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 123080 kB' 'Mapped: 48540 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140664 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73780 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.170 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 
21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.171 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9048808 kB' 'MemUsed: 3193168 kB' 'SwapCached: 0 kB' 'Active: 493796 kB' 'Inactive: 1353000 kB' 'Active(anon): 131596 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1725664 kB' 'Mapped: 48540 kB' 'AnonPages: 122968 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 66884 kB' 'Slab: 140664 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.172 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.173 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.173 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.173 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.173 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.173 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.174 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.174 node0=512 expecting 512 00:04:40.174 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.174 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.174 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.174 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:40.174 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:40.174 00:04:40.174 real 0m0.509s 00:04:40.174 user 0m0.232s 00:04:40.174 sys 0m0.287s 00:04:40.174 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.174 ************************************ 00:04:40.174 END TEST per_node_1G_alloc 00:04:40.174 ************************************ 00:04:40.174 21:59:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:40.174 21:59:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:40.174 21:59:27 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:40.174 21:59:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.174 21:59:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.174 21:59:27 
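per_node_1G_alloc finishes by folding the node's surplus (0 here) into nodes_test[node] and confirming that node0 still reports the 512 pages the test allocated ([[ 512 == \5\1\2 ]], echoed as "node0=512 expecting 512") before run_test moves on to even_2G_alloc. A standalone sketch of that per-node comparison, reading the same /sys/devices/system/node path the trace consults; the helper name check_node_hugepages is made up for illustration and is not part of setup/common.sh:

  #!/usr/bin/env bash
  # Compare one node's allocated hugepage count against an expected value.
  check_node_hugepages() {
      local node=$1 expected=$2 key val
      # Per-node counters live in /sys/devices/system/node/nodeN/meminfo;
      # each line is prefixed with "Node N", e.g. "Node 0 HugePages_Total:  512".
      while read -r _ _ key val _; do
          if [[ $key == "HugePages_Total:" ]]; then
              echo "node${node}=${val} expecting ${expected}"
              [[ $val -eq $expected ]]
              return
          fi
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1   # counter not found
  }

  check_node_hugepages 0 512 || echo "unexpected hugepage count" >&2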
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.174 ************************************ 00:04:40.174 START TEST even_2G_alloc 00:04:40.174 ************************************ 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.174 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:40.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.694 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.694 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc 
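even_2G_alloc begins by turning the requested size into a page count: get_test_nr_hugepages 2097152 together with the 2048 kB hugepage size visible in the meminfo snapshots below yields nr_hugepages=1024 (2 GiB in 2 MiB pages), and the test then re-runs scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes so the pages are spread evenly across memory nodes. A sketch of that arithmetic; treating the 2097152 argument as kB is an inference from the Hugepagesize/Hugetlb fields, not something the trace states outright:

  #!/usr/bin/env bash
  # Reproduce the sizing step: 2 GiB requested, 2048 kB default hugepage size.
  size_kb=2097152           # argument passed to get_test_nr_hugepages (2 GiB)
  default_hugepage_kb=2048  # matches "Hugepagesize: 2048 kB" in the snapshots
  nr_hugepages=$(( size_kb / default_hugepage_kb ))
  echo "nr_hugepages=${nr_hugepages}"   # -> 1024

  # The test then re-runs the SPDK setup script with an even per-node split:
  #   NRHUGE=1024 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh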
-- setup/hugepages.sh@92 -- # local surp 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7998612 kB' 'MemAvailable: 9510320 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 494324 kB' 'Inactive: 1353004 kB' 'Active(anon): 132124 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123492 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140680 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73796 kB' 'KernelStack: 6452 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- 
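verify_nr_hugepages first checks that transparent hugepages are not globally disabled ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) and only then samples AnonHugePages. The get_meminfo call whose trace starts above follows a simple pattern: mapfile the meminfo file into an array (falling back to /proc/meminfo when no node is given), strip any leading "Node N " prefix, split each entry on IFS=': ', and echo the value whose key matches the requested field. A minimal standalone version of that lookup; the name get_meminfo_value is hypothetical, standing in for setup/common.sh's get_meminfo:

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern used to strip the node prefix

  # Look up one field from /proc/meminfo, or from a node's own meminfo file.
  get_meminfo_value() {
      local get=$1 node=${2:-} var val _ line
      local mem_f=/proc/meminfo mem
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo_value HugePages_Surp   # prints 0 on the system traced above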
setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.694 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.695 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7998612 kB' 'MemAvailable: 9510320 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 493832 kB' 'Inactive: 1353004 kB' 'Active(anon): 131632 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 
1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123008 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140700 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73816 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.696 21:59:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.696 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.697 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7998612 kB' 'MemAvailable: 9510320 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 493888 kB' 'Inactive: 1353004 kB' 'Active(anon): 131688 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123092 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140700 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73816 kB' 'KernelStack: 6480 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.698 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc 
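By this point the same meminfo snapshot has been re-read and re-walked three times in a row, once per counter: AnonHugePages (anon=0), HugePages_Surp (surp=0) and now HugePages_Rsvd (0 in the snapshot just printed). Where only the values are needed, they can be collected in a single pass; a sketch of that alternative, which is not how setup/common.sh is written, just a compact equivalent for the same three counters:

  #!/usr/bin/env bash
  # Collect several meminfo counters in one read instead of one lookup per key.
  read -r anon surp resv < <(awk '
      /^AnonHugePages:/  { a = $2 }
      /^HugePages_Surp:/ { s = $2 }
      /^HugePages_Rsvd:/ { r = $2 }
      END { print a, s, r }' /proc/meminfo)
  echo "anon=${anon} surp=${surp} resv=${resv}"   # all 0 in the run traced above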
-- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:40.701 nr_hugepages=1024 00:04:40.701 resv_hugepages=0 00:04:40.701 surplus_hugepages=0 00:04:40.701 anon_hugepages=0 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7998612 kB' 'MemAvailable: 9510320 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 494024 kB' 'Inactive: 1353004 kB' 'Active(anon): 131824 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122992 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140692 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73808 kB' 'KernelStack: 6496 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.701 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
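The scan continues below until it reaches HugePages_Total, at which point get_meminfo echoes the value (1024) and returns. What the trace is showing is the get_meminfo helper from setup/common.sh: it snapshots /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node is given), strips any leading "Node <N> " prefix, then splits each line with IFS=': ' and echoes the value of the first field that matches the requested key. A minimal standalone sketch of that pattern follows; the function name and the plain for loop are illustrative, not the actual SPDK implementation.

  #!/usr/bin/env bash
  # Sketch of the meminfo lookup pattern visible in the trace; not the real
  # setup/common.sh code. All names here are illustrative.
  shopt -s extglob                              # needed for the +([0-9]) prefix pattern
  get_meminfo_sketch() {
      local get=$1 node=${2:-}                  # key to look up, optional NUMA node
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix each line with "Node N "
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then         # e.g. HugePages_Total, HugePages_Rsvd
              echo "$val"
              return 0
          fi
      done
      return 1
  }
  # Usage: get_meminfo_sketch HugePages_Free 0   # free 2048 kB hugepages on node 0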
00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.702 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.703 21:59:27 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7998612 kB' 'MemUsed: 4243364 kB' 'SwapCached: 0 kB' 'Active: 494112 kB' 'Inactive: 1353004 kB' 'Active(anon): 131912 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1725668 kB' 'Mapped: 48536 kB' 'AnonPages: 122988 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66884 kB' 'Slab: 140696 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.703 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.704 node0=1024 expecting 1024 00:04:40.704 ************************************ 00:04:40.704 END TEST even_2G_alloc 00:04:40.704 ************************************ 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:40.704 00:04:40.704 real 0m0.575s 00:04:40.704 user 0m0.280s 00:04:40.704 sys 0m0.286s 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.704 21:59:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:40.962 21:59:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:40.962 21:59:27 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:40.962 21:59:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.962 21:59:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.962 21:59:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.962 ************************************ 00:04:40.962 START TEST odd_alloc 00:04:40.962 ************************************ 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
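With even_2G_alloc finished (node0=1024 expecting 1024), odd_alloc sets up an intentionally odd page count: HUGEMEM=2049 (megabytes) becomes size=2098176 kB, and with the 2048 kB hugepage size reported in the meminfo dumps that works out to the nr_hugepages=1025 assignment seen above, all placed on the single node. A rough sketch of that arithmetic, assuming a simple round-up rather than whatever rounding setup/hugepages.sh actually applies:

  # Sketch of the HUGEMEM -> nr_hugepages arithmetic implied by the trace
  # (size=2098176 kB, Hugepagesize=2048 kB, nr_hugepages=1025); the rounding
  # mode is an assumption, not taken from the SPDK scripts.
  HUGEMEM=2049                                                          # MB requested by odd_alloc
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this VM
  size_kb=$(( HUGEMEM * 1024 ))                                         # 2098176 kB
  nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb )) # ceil -> 1025
  echo "nr_hugepages=$nr_hugepages"

The entries that follow run scripts/setup.sh to apply that allocation before verify_nr_hugepages re-reads /proc/meminfo.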
00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.962 21:59:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.224 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:41.224 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7993576 kB' 'MemAvailable: 9505284 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 494576 kB' 'Inactive: 1353004 kB' 'Active(anon): 132376 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123532 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140712 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73828 kB' 'KernelStack: 6468 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.224 21:59:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.224 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 
21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 
21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.225 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
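Each long run of "continue" lines above is get_meminfo walking a captured copy of /proc/meminfo one field at a time until it reaches the requested key (here AnonHugePages, which is 0, so anon=0). A hedged re-sketch of that per-key scan, written independently and not copied from setup/common.sh:
# Sketch only: per-key /proc/meminfo scan in the style traced above.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # one traced "continue" per non-matching field
        echo "$val"                        # value in kB, or a bare count for HugePages_* fields
        return 0
    done < /proc/meminfo
    return 1
}
get_meminfo_sketch AnonHugePages           # -> 0 here, so the test records anon=0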
00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7993576 kB' 'MemAvailable: 9505284 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 494136 kB' 'Inactive: 1353004 kB' 'Active(anon): 131936 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123056 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140704 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73820 kB' 'KernelStack: 6480 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
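The backslash-laden comparisons in this stretch (for example [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]) are simply how bash xtrace renders a quoted right-hand side inside [[ ]]: each character is escaped to show it matches literally rather than as a glob. A small standalone illustration, independent of the SPDK scripts:
# Sketch only: why the trace escapes the pattern operand.
set -x
get=HugePages_Surp
var=MemTotal
[[ $var == "$get" ]] || echo "no match"
# traces roughly as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]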
00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.226 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 
21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7999228 kB' 'MemAvailable: 9510936 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 494196 kB' 'Inactive: 1353004 kB' 'Active(anon): 131996 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123220 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140696 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73812 kB' 'KernelStack: 6512 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.227 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
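The surrounding scans collect three numbers, surp (HugePages_Surp), resv (HugePages_Rsvd) and anon (AnonHugePages), and further down the trace checks them against the configured page count with (( 1025 == nr_hugepages + surp + resv )). A hedged sketch of that accounting using the values visible in the meminfo snapshots above; the variable names and exact expression in setup/hugepages.sh may differ:
# Sketch only: the consistency check this bookkeeping feeds into
# (values as reported in the snapshots; names are illustrative).
nr_hugepages=1025   # configured for odd_alloc
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
anon=0              # AnonHugePages
total=1025          # HugePages_Total
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"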
00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.228 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:41.229 nr_hugepages=1025 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:41.229 resv_hugepages=0 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.229 surplus_hugepages=0 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.229 anon_hugepages=0 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7999228 kB' 'MemAvailable: 9510936 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 493948 kB' 'Inactive: 1353004 kB' 'Active(anon): 131748 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122956 kB' 'Mapped: 48536 kB' 'Shmem: 10464 kB' 'KReclaimable: 66884 kB' 'Slab: 140692 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73808 kB' 'KernelStack: 6528 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.229 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.230 21:59:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.230 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7999228 kB' 'MemUsed: 4242748 kB' 'SwapCached: 0 kB' 'Active: 494132 kB' 'Inactive: 1353004 kB' 'Active(anon): 131932 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1725668 kB' 'Mapped: 48536 kB' 'AnonPages: 123112 kB' 'Shmem: 10464 kB' 'KernelStack: 6496 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66884 kB' 'Slab: 140692 kB' 'SReclaimable: 66884 kB' 'SUnreclaim: 73808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.231 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.232 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.232 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.232 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.232 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
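The per-node bookkeeping feeding this check was traced a few records back (setup/hugepages.sh@27–@33): get_nodes walks /sys/devices/system/node/node*, indexes an array by each directory's numeric suffix, and on this single-node VM ends up with no_nodes=1 and a count of 1025 for node 0. A rough standalone rendering of that shape, with 1025 hard-coded only to mirror the value in the trace (the real helper takes it from the system):

    # Sketch of the get_nodes bookkeeping; illustrative, not the script's exact code.
    shopt -s nullglob
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=1025   # per-node hugepage count seen in the trace
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes"            # -> no_nodes=1 on this runner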
00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.490 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
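The long scans above are all the same lookup: setup/common.sh's get_meminfo sets IFS=': ', reads one "key value" pair per line, skips every key that is not the one requested, and echoes the value once it matches; with a node argument it reads /sys/devices/system/node/node<N>/meminfo and strips the "Node N " prefix first. A minimal standalone sketch of that pattern (hypothetical name get_meminfo_sketch, simplified prefix handling, assuming bash 4+ for mapfile):

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookups read that node's own meminfo file, as in the trace above.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node * }")            # drop the "Node N " prefix on per-node files
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"             # value only; any unit lands in $_
                return 0
            fi
        done
        return 1
    }

Called as get_meminfo_sketch HugePages_Surp 0 it would print 0 here, matching the scan that just finished; the odd_alloc check around it then only has to confirm 1025 == nr_hugepages + surplus + reserved and that node0 holds all 1025 pages.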
00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.491 node0=1025 expecting 1025 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:41.491 00:04:41.491 real 0m0.501s 00:04:41.491 user 0m0.261s 00:04:41.491 sys 0m0.267s 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.491 21:59:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:41.491 ************************************ 00:04:41.491 END TEST odd_alloc 00:04:41.491 ************************************ 00:04:41.491 21:59:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:41.491 21:59:28 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:41.491 21:59:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.491 21:59:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.491 21:59:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.491 ************************************ 00:04:41.491 START TEST custom_alloc 00:04:41.491 ************************************ 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.491 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.751 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:41.751 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9049288 kB' 'MemAvailable: 10560980 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 494348 kB' 'Inactive: 1353004 kB' 'Active(anon): 132148 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123520 kB' 'Mapped: 48496 kB' 'Shmem: 10464 kB' 'KReclaimable: 66852 kB' 'Slab: 140624 kB' 'SReclaimable: 66852 kB' 'SUnreclaim: 73772 kB' 'KernelStack: 6468 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.751 21:59:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.751 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
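The sizing step traced just before this scan (setup/hugepages.sh@174 through @187) turns the requested 1048576 kB into 512 hugepages and pins them to the only node via HUGENODE='nodes_hp[0]=512'; the meminfo dump above reflects exactly that ('Hugepagesize: 2048 kB', 'HugePages_Total: 512', 'Hugetlb: 1048576 kB'). Spelled out with the numbers from the trace, it is plain integer division:

    requested_kb=1048576      # argument to get_test_nr_hugepages in the trace
    hugepagesize_kb=2048      # Hugepagesize reported in the dump above
    echo $(( requested_kb / hugepagesize_kb ))   # 512, the nr_hugepages that gets set

512 pages at 2048 kB each is the 1048576 kB reported as Hugetlb once the reservation is in place.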
00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.752 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241976 kB' 'MemFree: 9049288 kB' 'MemAvailable: 10560980 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 493868 kB' 'Inactive: 1353004 kB' 'Active(anon): 131668 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123048 kB' 'Mapped: 48540 kB' 'Shmem: 10464 kB' 'KReclaimable: 66852 kB' 'Slab: 140664 kB' 'SReclaimable: 66852 kB' 'SUnreclaim: 73812 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.753 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.754 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
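The scan then restarts from a fresh snapshot for HugePages_Surp (hugepages.sh@99), and the answer is already visible in the printf line above it: HugePages_Total: 512, HugePages_Free: 512, HugePages_Surp: 0. For anyone reproducing the check by hand, roughly equivalent one-liners (illustrative only, not the harness's own code) would be:

    # direct queries against the global counters; not the SPDK helper itself
    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # surplus pages, 0 in this run
    awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo   # reserved pages, checked next
    awk '$1 == "AnonHugePages:"  {print $2}' /proc/meminfo   # anonymous THP usage in kB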
00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.755 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9049288 kB' 'MemAvailable: 10560980 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 493840 kB' 'Inactive: 1353004 kB' 'Active(anon): 131640 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123064 kB' 'Mapped: 48540 kB' 'Shmem: 10464 kB' 'KReclaimable: 66852 kB' 'Slab: 140660 kB' 'SReclaimable: 66852 kB' 'SUnreclaim: 73808 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.017 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.018 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
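Once the HugePages_Rsvd pass still running here finishes (echo 0 / return 0 on the next line), verify_nr_hugepages holds anon=0, surp=0 and resv=0, and the checks at hugepages.sh@107 and @109 further down confirm that the 512 pages reported by the kernel are exactly the pages the custom_alloc test configured, with no surplus or reserved pages skewing the count. A sketch of that accounting check, with the surrounding variables assumed from the trace:

    # sketch of the checks at hugepages.sh@107/@109; nr_hugepages/surp/resv assumed from the trace
    nr_hugepages=512   # echoed by the test as nr_hugepages=512
    surp=0             # HugePages_Surp result from the previous pass
    resv=0             # HugePages_Rsvd result from this pass
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
    (( total == nr_hugepages ))               || echo "unexpected surplus/reserved pages"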
00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.019 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:42.020 nr_hugepages=512 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:42.020 resv_hugepages=0 
00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.020 surplus_hugepages=0 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.020 anon_hugepages=0 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9049288 kB' 'MemAvailable: 10560980 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 493844 kB' 'Inactive: 1353004 kB' 'Active(anon): 131644 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123016 kB' 'Mapped: 48540 kB' 'Shmem: 10464 kB' 'KReclaimable: 66852 kB' 'Slab: 140660 kB' 'SReclaimable: 66852 kB' 'SUnreclaim: 73808 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.020 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.021 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 
21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.022 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9049288 kB' 'MemUsed: 3192688 kB' 'SwapCached: 0 kB' 'Active: 493892 kB' 'Inactive: 1353004 kB' 'Active(anon): 131692 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1725668 kB' 'Mapped: 48540 kB' 'AnonPages: 123096 kB' 'Shmem: 10464 kB' 'KernelStack: 6496 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66852 kB' 'Slab: 140652 kB' 'SReclaimable: 66852 kB' 'SUnreclaim: 73800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.023 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.024 node0=512 expecting 512 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:42.024 00:04:42.024 real 0m0.566s 00:04:42.024 user 0m0.266s 00:04:42.024 sys 0m0.309s 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.024 21:59:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:42.024 ************************************ 00:04:42.024 END TEST custom_alloc 
00:04:42.024 ************************************ 00:04:42.024 21:59:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:42.024 21:59:28 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:42.024 21:59:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.024 21:59:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.024 21:59:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:42.024 ************************************ 00:04:42.024 START TEST no_shrink_alloc 00:04:42.024 ************************************ 00:04:42.024 21:59:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.025 21:59:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.283 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.283 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:42.283 
21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7996680 kB' 'MemAvailable: 9508372 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 494464 kB' 'Inactive: 1353004 kB' 'Active(anon): 132264 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123416 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 66852 kB' 'Slab: 140620 kB' 'SReclaimable: 66852 kB' 'SUnreclaim: 73768 kB' 'KernelStack: 6436 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
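[editor's note] At this point no_shrink_alloc has requested 1024 pages (2097152 kB / 2048 kB hugepagesize) on node 0, and verify_nr_hugepages begins by ruling out transparent-hugepage interference before reading the pool counters: hugepages.sh@96 compares "always [madvise] never" against *[never]*, and only then looks up AnonHugePages. A rough reconstruction of that prologue from the traced hugepages.sh@89-@99 lines follows; the THP sysfs path and the echo layout are assumptions, and get_meminfo is the helper sketched earlier in these notes.

# Sketch of the verify_nr_hugepages prologue traced at hugepages.sh@89-@99.
# Reconstructed from the log, not the verbatim SPDK source.
verify_nr_hugepages() {
	local node surp resv anon=0

	# hugepages.sh@96: in this run the THP setting is "always [madvise] never",
	# which does not match *[never]*, so AnonHugePages is read; with THP
	# hard-disabled the lookup would be skipped and anon would stay 0.
	if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
		anon=$(get_meminfo AnonHugePages)
	fi

	surp=$(get_meminfo HugePages_Surp)
	resv=$(get_meminfo HugePages_Rsvd)

	echo "nr_hugepages=$(get_meminfo HugePages_Total)"
	echo "anon_hugepages=$anon surplus_hugepages=$surp resv_hugepages=$resv"
}

The trailing per-node checks in the trace then read /sys/devices/system/node/node0/meminfo (get_meminfo HugePages_Surp 0) and assert that node0 holds the full 1024-page pool, mirroring the "node0=512 expecting 512" check that closed the custom_alloc test above.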
00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.283 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.284 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.284 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.284 
21:59:29 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: get_meminfo AnonHugePages compares each remaining /proc/meminfo key, Inactive(anon) through HardwareCorrupted, against AnonHugePages and skips it via the continue branch]
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
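The get_meminfo call traced above walks /proc/meminfo one "Key: value" line at a time with IFS set to ': ' and echoes the value once the requested key (here AnonHugePages) matches. A minimal stand-alone sketch of that lookup pattern, with a function name of my own (get_meminfo_value, not the SPDK helper itself):

    #!/usr/bin/env bash
    # Sketch of the lookup pattern visible in the setup/common.sh trace:
    # split each /proc/meminfo line on ': ' and print the value for the
    # requested key. Function name is illustrative, not SPDK's.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys
            echo "$val"                        # numeric part only; the kB unit lands in "_"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value AnonHugePages   # prints the host's AnonHugePages figure (0 on this runner, matching anon=0 above)

On this run every per-key comparison except the final one takes the continue branch, which is why the raw trace repeats the same IFS/read/continue lines for every field in the snapshot.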
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7996680 kB' 'MemAvailable: 9508372 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 493960 kB' 'Inactive: 1353004 kB' 'Active(anon): 131760 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122904 kB' 'Mapped: 48548 kB' 'Shmem: 10464 kB' 'KReclaimable: 66852 kB' 'Slab: 140604 kB' 'SReclaimable: 66852 kB' 'SUnreclaim: 73752 kB' 'KernelStack: 6496 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:42.547 21:59:29 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: get_meminfo HugePages_Surp scans the snapshot key by key, MemTotal through HugePages_Rsvd, taking the continue branch on every non-matching key]
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
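The common.sh@22 through @29 entries above show how the helper picks its input file: mem_f defaults to /proc/meminfo, and a per-node query would switch to /sys/devices/system/node/node$node/meminfo (here $node is empty, so the @23 check fails and the global file is used). Lines in the per-node file carry a "Node <N> " prefix, which the extglob expansion at @29 strips. A stand-alone illustration, assuming node0 exists on the host and with variable names of my own:

    #!/usr/bin/env bash
    # Illustration of the node-aware file selection and the "Node <N> "
    # prefix strip seen at common.sh@22-@29; node number is an assumption.
    shopt -s extglob                      # required for the +([0-9]) pattern below
    node=0                                # assumed: node0 is present on this host
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines start with "Node 0 "; strip that prefix so the
    # same "Key: value" parsing works for both the global and per-node files.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"         # first few normalized lines

Because the array is normalized this way, the key-by-key scan that follows is identical whether the snapshot came from /proc/meminfo or from a single NUMA node.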
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.549 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7996680 kB' 'MemAvailable: 9508372 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 494208 kB' 'Inactive: 1353004 kB' 'Active(anon): 132008 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123152 kB' 'Mapped: 48548 kB' 'Shmem: 10464 kB' 'KReclaimable: 66852 kB' 'Slab: 140604 kB' 'SReclaimable: 66852 kB' 'SUnreclaim: 73752 kB' 'KernelStack: 6496 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
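The hugepage fields in the snapshot just printed are internally consistent: 1024 pages of 2048 kB each is 2097152 kB, exactly the Hugetlb figure, and HugePages_Free equals HugePages_Total, so the whole pool is still unused at this point. A quick cross-check with shell arithmetic (values copied from the snapshot; variable names are mine):

    pages=1024       # HugePages_Total from the snapshot above
    page_kb=2048     # Hugepagesize in kB
    echo $(( pages * page_kb ))          # 2097152, matches 'Hugetlb: 2097152 kB'
    echo $(( pages * page_kb / 1024 ))   # 2048 MiB, i.e. a 2 GiB hugepage pool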
00:04:42.550 21:59:29 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: get_meminfo HugePages_Rsvd scans the snapshot key by key, MemTotal through HugePages_Free, taking the continue branch on every non-matching key]
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:42.551 nr_hugepages=1024 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:42.551 resv_hugepages=0 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:42.551 surplus_hugepages=0 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:42.551 anon_hugepages=0 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
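With anon, surp and resv all read back as 0, the checks at hugepages.sh@107 and @109 verify that the 1024 pages the no_shrink_alloc test expects equal nr_hugepages plus the surplus and reserved counts, and that nr_hugepages alone still matches, before @110 re-reads HugePages_Total from the kernel. Restated as a small sketch using the values visible in this run (variable names follow the trace; "expected" is an illustrative name of mine):

    # Hedged restatement of the checks at hugepages.sh@107-@110.
    expected=1024
    nr_hugepages=1024   # echoed by the script just above
    surp=0              # get_meminfo HugePages_Surp
    resv=0              # get_meminfo HugePages_Rsvd
    anon=0              # get_meminfo AnonHugePages
    if (( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages )); then
        echo "hugepage pool fully accounted for: $nr_hugepages pages (anon=$anon)"
    else
        echo "unexpected hugepage accounting" >&2
        exit 1
    fi

The get_meminfo HugePages_Total call that the trace starts next supplies the kernel-side figure this expectation is compared against.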
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7998064 kB' 'MemAvailable: 9509756 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 493920 kB' 'Inactive: 1353004 kB' 'Active(anon): 131720 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122892 kB' 'Mapped: 48548 kB' 'Shmem: 10464 kB' 'KReclaimable: 66852 kB' 'Slab: 140604 kB' 'SReclaimable: 66852 kB' 'SUnreclaim: 73752 kB' 'KernelStack: 6496 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB'
00:04:42.551 21:59:29 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: get_meminfo HugePages_Total begins the same key-by-key scan of the snapshot, MemTotal, MemFree, and so on, each non-matching key taking the continue branch; the trace continues] 00:04:42.552 21:59:29
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.552 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7998064 kB' 'MemUsed: 4243912 kB' 'SwapCached: 0 kB' 'Active: 494124 kB' 'Inactive: 1353004 kB' 'Active(anon): 131924 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1725668 kB' 'Mapped: 48548 kB' 'AnonPages: 123096 kB' 
'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66852 kB' 'Slab: 140604 kB' 'SReclaimable: 66852 kB' 'SUnreclaim: 73752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.553 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 
21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.554 node0=1024 expecting 1024 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.554 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.814 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.814 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.814 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.814 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:42.814 21:59:29 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7997792 kB' 'MemAvailable: 9509480 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 489336 kB' 'Inactive: 1353004 kB' 'Active(anon): 127136 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118228 kB' 'Mapped: 47936 kB' 'Shmem: 10464 kB' 'KReclaimable: 66848 kB' 'Slab: 140440 kB' 'SReclaimable: 66848 kB' 'SUnreclaim: 73592 kB' 'KernelStack: 6356 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
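This verify_nr_hugepages pass follows the setup.sh rerun with NRHUGE=512 and CLEAR_HUGE=no; the kernel keeps the 1024 pages already in the pool, which is what the 'Requested 512 hugepages but 1024 already allocated on node0' message reports. The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] entry is the hugepages.sh@96 gate on transparent hugepages (the tested string is the familiar contents of /sys/kernel/mm/transparent_hugepage/enabled): AnonHugePages is only consulted when THP is not set to never. A standalone restatement of that gate, with awk standing in for the script's own get_meminfo helper:

    # THP state as reported on this VM: "always [madvise] never", i.e. madvise mode.
    thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp_state != *"[never]"* ]]; then
        # The traced script calls its get_meminfo helper here; awk stands in for it.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 in the dump above
    fi
    echo "anon_hugepages=$anon"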
00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.814 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.815 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7998464 kB' 'MemAvailable: 9510152 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 489184 kB' 'Inactive: 1353004 kB' 'Active(anon): 126984 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 118076 kB' 'Mapped: 47816 kB' 'Shmem: 10464 kB' 'KReclaimable: 66848 kB' 'Slab: 140436 kB' 'SReclaimable: 66848 kB' 'SUnreclaim: 73588 kB' 'KernelStack: 6336 kB' 'PageTables: 3588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.816 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
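The repeated "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" steps in the trace above are a key-by-key scan of /proc/meminfo: the file is read into an array, any "Node <n> " prefix is stripped, and each "key: value" pair is compared against the requested field until it matches and its value is echoed. A condensed, illustrative re-creation of that lookup pattern (names follow the trace, but this is a sketch, not the verbatim setup/common.sh helper):

get_meminfo_sketch() {
    local get=$1 node=${2:-}            # field name, optional NUMA node
    local mem_f=/proc/meminfo mem var val _
    # with a node number, the per-node meminfo file is used instead
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    shopt -s extglob                    # needed for the +([0-9]) pattern below
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node <n> " prefix, if any
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp   -> prints "0" on this box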
00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 
21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.078 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.079 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7998212 kB' 'MemAvailable: 9509900 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 489152 kB' 'Inactive: 1353004 kB' 'Active(anon): 126952 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 118056 kB' 'Mapped: 47808 kB' 'Shmem: 10464 kB' 'KReclaimable: 66848 kB' 'Slab: 140432 kB' 'SReclaimable: 66848 kB' 'SUnreclaim: 73584 kB' 'KernelStack: 6368 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
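The meminfo snapshots printed above ('HugePages_Total: 1024', 'Hugepagesize: 2048 kB', 'Hugetlb: 2097152 kB') are internally consistent: 1024 pages x 2048 kB = 2097152 kB (2 GiB) set aside for hugepages out of the ~12 GB MemTotal, and HugePages_Free still equals HugePages_Total, so none of the pool is in use yet. A one-liner to recheck that arithmetic (values copied from the snapshot, purely illustrative):

total=1024 size_kb=2048 hugetlb_kb=2097152
(( total * size_kb == hugetlb_kb )) && echo "Hugetlb consistent: $(( total * size_kb )) kB"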
00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.080 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:43.081 nr_hugepages=1024 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:43.081 resv_hugepages=0 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.081 surplus_hugepages=0 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.081 anon_hugepages=0 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7998212 kB' 'MemAvailable: 9509900 kB' 'Buffers: 2436 kB' 'Cached: 1723232 kB' 'SwapCached: 0 kB' 'Active: 488848 kB' 'Inactive: 1353004 kB' 'Active(anon): 126648 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 118012 kB' 'Mapped: 47808 kB' 'Shmem: 10464 kB' 'KReclaimable: 66848 kB' 'Slab: 140432 kB' 'SReclaimable: 66848 kB' 'SUnreclaim: 73584 kB' 'KernelStack: 6368 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 5068800 kB' 'DirectMap1G: 9437184 kB' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.081 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
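A few trace lines back, hugepages.sh folded the three lookups into its bookkeeping (anon=0, surp=0, resv=0, nr_hugepages=1024) before re-reading HugePages_Total, and its guards "(( 1024 == nr_hugepages + surp + resv ))" and "(( 1024 == nr_hugepages ))" reduce with these values to 1024 == 1024 + 0 + 0 and 1024 == 1024. The same bookkeeping as a standalone sketch (values from this run; the literal 1024 is the hugepage count observed on the box, not a general constant):

nr_hugepages=1024   # configured pool size
surp=0              # HugePages_Surp from the lookup above
resv=0              # HugePages_Rsvd from the lookup above
(( 1024 == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved pages"
(( 1024 == nr_hugepages ))               || echo "pool size changed under the test"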
00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.082 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
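(Once the scan hits HugePages_Total — the "echo 1024" just below — hugepages.sh checks the kernel-wide count against nr_hugepages + surplus + reserved, then get_nodes repeats the lookup per NUMA node, which is what produces the "node0=1024 expecting 1024" line further down. A rough per-node equivalent using the standard sysfs counters; the helper name is not from the script and 2 MiB hugepages are assumed:)

check_node_hugepages() {
    local node count size_kb=2048        # assuming 2048 kB (2 MiB) hugepages
    for node in /sys/devices/system/node/node[0-9]*; do
        count=$(cat "$node/hugepages/hugepages-${size_kb}kB/nr_hugepages")
        printf '%s=%s\n' "${node##*/}" "$count"
    done
}
# Prints "node0=1024" on this single-node VM, matching the check below.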
00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7998212 kB' 'MemUsed: 4243764 kB' 'SwapCached: 0 kB' 'Active: 
488860 kB' 'Inactive: 1353004 kB' 'Active(anon): 126660 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1353004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1725668 kB' 'Mapped: 47808 kB' 'AnonPages: 118076 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66848 kB' 'Slab: 140432 kB' 'SReclaimable: 66848 kB' 'SUnreclaim: 73584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.083 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 
21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.084 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.085 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.085 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.085 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.085 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.085 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.085 node0=1024 expecting 1024 00:04:43.085 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:43.085 21:59:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:43.085 00:04:43.085 real 0m1.035s 00:04:43.085 user 0m0.506s 00:04:43.085 sys 0m0.562s 00:04:43.085 21:59:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.085 21:59:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:43.085 ************************************ 00:04:43.085 END TEST no_shrink_alloc 00:04:43.085 ************************************ 00:04:43.085 21:59:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:43.085 21:59:29 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:43.085 21:59:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:43.085 21:59:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:43.085 
21:59:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.085 21:59:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:43.085 21:59:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.085 21:59:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:43.085 21:59:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:43.085 21:59:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:43.085 00:04:43.085 real 0m4.511s 00:04:43.085 user 0m2.127s 00:04:43.085 sys 0m2.371s 00:04:43.085 21:59:29 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.085 21:59:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.085 ************************************ 00:04:43.085 END TEST hugepages 00:04:43.085 ************************************ 00:04:43.085 21:59:29 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:43.085 21:59:29 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:43.085 21:59:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.085 21:59:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.085 21:59:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:43.085 ************************************ 00:04:43.085 START TEST driver 00:04:43.085 ************************************ 00:04:43.085 21:59:29 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:43.356 * Looking for test storage... 00:04:43.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:43.356 21:59:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:43.356 21:59:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.356 21:59:30 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:43.929 21:59:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:43.929 21:59:30 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.929 21:59:30 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.929 21:59:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:43.929 ************************************ 00:04:43.929 START TEST guess_driver 00:04:43.929 ************************************ 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
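(The guess_driver records from here on show driver.sh picking a userspace driver: vfio is preferred when IOMMU groups exist or unsafe no-IOMMU mode is enabled; with zero groups on this VM it falls back to uio_pci_generic, confirmed via modprobe --show-depends. A condensed sketch of that decision — the function name, the nullglob subshell and the vfio-pci spelling are assumptions, the rest mirrors the trace:)

pick_driver_sketch() (
    shopt -s nullglob                    # an empty /sys/kernel/iommu_groups counts as 0
    groups=(/sys/kernel/iommu_groups/*)
    unsafe=''
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic &> /dev/null; then
        echo uio_pci_generic             # the branch taken in this run (0 IOMMU groups)
    else
        echo 'No valid driver found'
    fi
)
# The test then runs setup.sh config and checks that it bound the same driver
# (the "[[ uio_pci_generic == uio_pci_generic ]]" records below).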
00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:43.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:43.929 Looking for driver=uio_pci_generic 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.929 21:59:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:44.547 21:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:44.547 21:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:44.547 21:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.547 21:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.547 21:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:44.547 21:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.547 21:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.547 21:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:44.547 21:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.824 21:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:44.824 21:59:31 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:44.824 21:59:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.824 21:59:31 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.083 00:04:45.083 real 0m1.337s 00:04:45.083 user 0m0.487s 00:04:45.083 sys 0m0.836s 00:04:45.083 21:59:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:45.083 21:59:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.083 ************************************ 00:04:45.083 END TEST guess_driver 00:04:45.083 ************************************ 00:04:45.083 21:59:31 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:45.083 ************************************ 00:04:45.083 END TEST driver 00:04:45.083 ************************************ 00:04:45.083 00:04:45.083 real 0m2.024s 00:04:45.083 user 0m0.740s 00:04:45.083 sys 0m1.316s 00:04:45.083 21:59:31 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.083 21:59:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.083 21:59:32 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:45.083 21:59:32 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:45.083 21:59:32 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.083 21:59:32 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.083 21:59:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:45.342 ************************************ 00:04:45.342 START TEST devices 00:04:45.342 ************************************ 00:04:45.342 21:59:32 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:45.342 * Looking for test storage... 00:04:45.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:45.342 21:59:32 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:45.342 21:59:32 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:45.342 21:59:32 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.342 21:59:32 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
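(From here devices.sh screens the attached NVMe namespaces: get_zoned_devs skips anything zoned, then each candidate under /sys/block/nvme!(*c*) must be at least min_disk_size bytes and carry no recognizable partition table — hence the "No valid GPT data, bailing" lines from spdk-gpt.py below. A compact sketch of that screening; the helper name is an assumption, and the real script additionally records each disk's PCI address in blocks_to_pci:)

screen_test_disks() {
    local min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
    local block dev size
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # Zoned namespaces are excluded up front, mirroring get_zoned_devs.
        [[ $(cat "$block/queue/zoned" 2> /dev/null) == none ]] || continue
        size=$(( $(cat "$block/size") * 512 ))        # sysfs size is in 512-byte sectors
        (( size >= min_disk_size )) || continue
        # An existing partition table means the disk is in use; blkid -s PTTYPE is
        # the same probe the log shows after spdk-gpt.py bails out.
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue
        echo "usable test disk: /dev/$dev ($size bytes)"
    done
}
# In this run all four namespaces (three 4 GiB, one 5 GiB) pass and nvme0n1
# becomes test_disk for the nvme_mount test that follows.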
00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:45.908 21:59:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:45.908 21:59:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:45.908 21:59:32 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:45.908 No valid GPT data, bailing 00:04:45.908 21:59:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:45.908 21:59:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:45.908 21:59:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:45.908 21:59:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:45.908 21:59:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:45.908 21:59:32 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:45.908 21:59:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:45.909 21:59:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:45.909 
21:59:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:45.909 21:59:32 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:46.167 No valid GPT data, bailing 00:04:46.167 21:59:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:46.167 21:59:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:46.167 21:59:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:46.167 21:59:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:46.167 21:59:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:46.167 21:59:32 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:46.167 21:59:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:46.167 21:59:32 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:46.167 No valid GPT data, bailing 00:04:46.167 21:59:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:46.167 21:59:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:46.167 21:59:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:46.167 21:59:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:46.167 21:59:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:46.167 21:59:32 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:46.167 21:59:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:46.167 21:59:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:46.167 21:59:32 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:46.167 No valid GPT data, bailing 00:04:46.167 21:59:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:46.167 21:59:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:46.167 21:59:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:46.167 21:59:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:46.167 21:59:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:46.167 21:59:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:46.167 21:59:33 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:46.167 21:59:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:46.167 21:59:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:46.167 21:59:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:46.167 21:59:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:46.167 21:59:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:46.167 21:59:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:46.167 21:59:33 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.167 21:59:33 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.167 21:59:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:46.167 ************************************ 00:04:46.167 START TEST nvme_mount 00:04:46.167 ************************************ 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:46.167 21:59:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:47.539 Creating new GPT entries in memory. 00:04:47.539 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:47.539 other utilities. 00:04:47.539 21:59:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:47.539 21:59:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.539 21:59:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:47.539 21:59:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:47.539 21:59:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:48.473 Creating new GPT entries in memory. 00:04:48.473 The operation has completed successfully. 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58996 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:48.473 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.730 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:48.730 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.730 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:48.730 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.731 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.731 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:48.731 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.731 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.731 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.731 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:48.731 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.731 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.731 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.731 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.731 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.731 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.731 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:48.988 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:48.988 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:48.988 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:48.988 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:48.988 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:48.988 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:48.988 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.988 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:48.988 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:48.988 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.988 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.988 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:48.988 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:48.989 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.989 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.989 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:48.989 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.989 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:48.989 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:48.989 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.989 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:48.989 21:59:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:48.989 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.989 21:59:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.246 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.246 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:49.246 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:49.246 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.246 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.246 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.246 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.246 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.503 21:59:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.503 21:59:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.760 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.760 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:49.760 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:49.760 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.760 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.760 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.760 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.760 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.018 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.018 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.018 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.018 21:59:36 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.018 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:50.018 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:50.018 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:50.018 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.018 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.018 21:59:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.018 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.018 00:04:50.018 real 0m3.748s 00:04:50.018 user 0m0.618s 00:04:50.018 sys 0m0.881s 00:04:50.018 21:59:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.018 21:59:36 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:50.018 ************************************ 00:04:50.018 END TEST nvme_mount 00:04:50.018 ************************************ 00:04:50.018 21:59:36 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:50.018 21:59:36 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:50.018 21:59:36 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.018 21:59:36 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.018 21:59:36 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:50.018 ************************************ 00:04:50.018 START TEST dm_mount 00:04:50.018 ************************************ 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:50.018 21:59:36 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:50.953 Creating new GPT entries in memory. 00:04:50.953 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:50.953 other utilities. 00:04:50.953 21:59:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:50.953 21:59:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.953 21:59:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:50.953 21:59:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:50.953 21:59:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:52.365 Creating new GPT entries in memory. 00:04:52.365 The operation has completed successfully. 00:04:52.365 21:59:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:52.365 21:59:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.365 21:59:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.365 21:59:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.365 21:59:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:53.300 The operation has completed successfully. 
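The dm_mount trace above zaps the GPT on /dev/nvme0n1 and then carves out two 262144-sector partitions with sgdisk before layering a device-mapper target on top of them. The sketch below reproduces just that partitioning step; it assumes a disposable /dev/nvme0n1 and swaps the test's flock/uevent synchronization (sync_dev_uevents.sh in the trace) for a plain partprobe.

```bash
#!/usr/bin/env bash
# Sketch of the sgdisk sequence shown in the trace above.
# Assumes /dev/nvme0n1 is a scratch disk whose contents may be destroyed.
set -euo pipefail
disk=/dev/nvme0n1

sgdisk "$disk" --zap-all               # destroy any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191     # partition 1: 262144 sectors
sgdisk "$disk" --new=2:264192:526335   # partition 2: 262144 sectors
partprobe "$disk"                      # ask the kernel to re-read the partition table
lsblk "$disk"                          # expect nvme0n1p1 and nvme0n1p2 to appear
```

The test itself serializes each sgdisk call with flock on the whole disk and waits for the matching uevents, which matters when several jobs touch the same device; the sketch above skips that.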
00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59430 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.300 21:59:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.301 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.301 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:53.301 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:53.301 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.301 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.301 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.559 21:59:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.817 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.817 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:53.817 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:53.817 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.817 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.817 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:54.076 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:54.076 00:04:54.076 real 0m4.094s 00:04:54.076 user 0m0.452s 00:04:54.076 sys 0m0.605s 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.076 21:59:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:54.076 ************************************ 00:04:54.076 END TEST dm_mount 00:04:54.076 ************************************ 00:04:54.076 21:59:40 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:54.076 21:59:40 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:54.076 21:59:40 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:54.076 21:59:40 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.076 21:59:40 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.076 21:59:40 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:54.076 21:59:40 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.076 21:59:40 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:54.334 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:54.334 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:54.334 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:54.334 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:54.334 21:59:41 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:54.334 21:59:41 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.334 21:59:41 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:54.334 21:59:41 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.335 21:59:41 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:54.335 21:59:41 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.335 21:59:41 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:54.335 00:04:54.335 real 0m9.235s 00:04:54.335 user 0m1.668s 00:04:54.335 sys 0m2.007s 00:04:54.335 21:59:41 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.335 21:59:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:54.335 ************************************ 00:04:54.335 END TEST devices 00:04:54.335 ************************************ 00:04:54.593 21:59:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:54.593 00:04:54.593 real 0m20.350s 00:04:54.593 user 0m6.554s 00:04:54.593 sys 0m8.228s 00:04:54.593 21:59:41 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.593 ************************************ 00:04:54.593 END TEST setup.sh 00:04:54.593 21:59:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:54.593 ************************************ 00:04:54.593 21:59:41 -- common/autotest_common.sh@1142 -- # return 0 00:04:54.593 21:59:41 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:55.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.160 Hugepages 00:04:55.160 node hugesize free / total 00:04:55.160 node0 1048576kB 0 / 0 00:04:55.160 node0 2048kB 2048 / 2048 00:04:55.160 00:04:55.160 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.160 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:55.160 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:55.419 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:55.419 21:59:42 -- spdk/autotest.sh@130 -- # uname -s 00:04:55.419 21:59:42 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:55.419 21:59:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:55.419 21:59:42 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:55.985 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.985 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.985 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:56.243 21:59:42 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:57.205 21:59:43 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:57.205 21:59:43 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:57.205 21:59:43 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:57.205 21:59:43 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:57.205 21:59:43 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:57.205 21:59:43 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:57.205 21:59:43 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.205 21:59:43 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:57.205 21:59:43 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:57.205 21:59:43 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:57.205 21:59:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:57.205 21:59:43 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.463 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.463 Waiting for block devices as requested 00:04:57.463 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:57.722 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:57.722 21:59:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:57.722 21:59:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:57.722 21:59:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:57.722 21:59:44 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:57.722 21:59:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:57.722 21:59:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:57.722 21:59:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:57.722 21:59:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:57.722 21:59:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:57.722 21:59:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:57.722 21:59:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:57.722 21:59:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:57.722 21:59:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:57.722 21:59:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:57.722 21:59:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:57.722 21:59:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:57.722 21:59:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:57.722 21:59:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:57.722 21:59:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:57.722 21:59:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:57.722 21:59:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:57.722 21:59:44 -- common/autotest_common.sh@1557 -- # continue 00:04:57.722 
21:59:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:57.722 21:59:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:57.722 21:59:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:57.722 21:59:44 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:57.722 21:59:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:57.722 21:59:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:57.722 21:59:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:57.722 21:59:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:57.722 21:59:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:57.722 21:59:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:57.722 21:59:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:57.722 21:59:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:57.722 21:59:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:57.722 21:59:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:57.722 21:59:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:57.722 21:59:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:57.722 21:59:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:57.722 21:59:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:57.722 21:59:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:57.722 21:59:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:57.722 21:59:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:57.722 21:59:44 -- common/autotest_common.sh@1557 -- # continue 00:04:57.722 21:59:44 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:57.722 21:59:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.722 21:59:44 -- common/autotest_common.sh@10 -- # set +x 00:04:57.722 21:59:44 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:57.722 21:59:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.722 21:59:44 -- common/autotest_common.sh@10 -- # set +x 00:04:57.722 21:59:44 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:58.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.548 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.548 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.548 21:59:45 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:58.548 21:59:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.548 21:59:45 -- common/autotest_common.sh@10 -- # set +x 00:04:58.548 21:59:45 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:58.548 21:59:45 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:58.548 21:59:45 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.548 21:59:45 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:58.548 21:59:45 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:58.548 21:59:45 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:58.548 21:59:45 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.548 21:59:45 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.548 21:59:45 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.548 21:59:45 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.548 21:59:45 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:58.806 21:59:45 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:58.806 21:59:45 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:58.806 21:59:45 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:58.806 21:59:45 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:58.806 21:59:45 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:58.806 21:59:45 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.806 21:59:45 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:58.806 21:59:45 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:58.806 21:59:45 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:58.806 21:59:45 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.806 21:59:45 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:58.806 21:59:45 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:58.806 21:59:45 -- common/autotest_common.sh@1593 -- # return 0 00:04:58.806 21:59:45 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:58.806 21:59:45 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:58.806 21:59:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:58.806 21:59:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:58.806 21:59:45 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:58.806 21:59:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:58.806 21:59:45 -- common/autotest_common.sh@10 -- # set +x 00:04:58.806 21:59:45 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:58.806 21:59:45 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.806 21:59:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.806 21:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.806 21:59:45 -- common/autotest_common.sh@10 -- # set +x 00:04:58.806 ************************************ 00:04:58.806 START TEST env 00:04:58.806 ************************************ 00:04:58.806 21:59:45 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.806 * Looking for test storage... 
00:04:58.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:58.806 21:59:45 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:58.806 21:59:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.806 21:59:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.806 21:59:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.806 ************************************ 00:04:58.806 START TEST env_memory 00:04:58.806 ************************************ 00:04:58.806 21:59:45 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:58.806 00:04:58.806 00:04:58.807 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.807 http://cunit.sourceforge.net/ 00:04:58.807 00:04:58.807 00:04:58.807 Suite: memory 00:04:58.807 Test: alloc and free memory map ...[2024-07-15 21:59:45.682447] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:58.807 passed 00:04:58.807 Test: mem map translation ...[2024-07-15 21:59:45.718388] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:58.807 [2024-07-15 21:59:45.718457] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:58.807 [2024-07-15 21:59:45.718515] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:58.807 [2024-07-15 21:59:45.718526] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:59.065 passed 00:04:59.065 Test: mem map registration ...[2024-07-15 21:59:45.782366] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:59.065 [2024-07-15 21:59:45.782422] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:59.065 passed 00:04:59.065 Test: mem map adjacent registrations ...passed 00:04:59.065 00:04:59.065 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.065 suites 1 1 n/a 0 0 00:04:59.065 tests 4 4 4 0 0 00:04:59.065 asserts 152 152 152 0 n/a 00:04:59.065 00:04:59.065 Elapsed time = 0.224 seconds 00:04:59.065 00:04:59.065 real 0m0.243s 00:04:59.065 user 0m0.221s 00:04:59.065 sys 0m0.018s 00:04:59.065 21:59:45 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.065 21:59:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:59.065 ************************************ 00:04:59.065 END TEST env_memory 00:04:59.065 ************************************ 00:04:59.065 21:59:45 env -- common/autotest_common.sh@1142 -- # return 0 00:04:59.065 21:59:45 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.065 21:59:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.065 21:59:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.065 21:59:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.065 ************************************ 00:04:59.065 START TEST env_vtophys 
00:04:59.065 ************************************ 00:04:59.065 21:59:45 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.065 EAL: lib.eal log level changed from notice to debug 00:04:59.065 EAL: Detected lcore 0 as core 0 on socket 0 00:04:59.065 EAL: Detected lcore 1 as core 0 on socket 0 00:04:59.065 EAL: Detected lcore 2 as core 0 on socket 0 00:04:59.065 EAL: Detected lcore 3 as core 0 on socket 0 00:04:59.065 EAL: Detected lcore 4 as core 0 on socket 0 00:04:59.065 EAL: Detected lcore 5 as core 0 on socket 0 00:04:59.065 EAL: Detected lcore 6 as core 0 on socket 0 00:04:59.065 EAL: Detected lcore 7 as core 0 on socket 0 00:04:59.065 EAL: Detected lcore 8 as core 0 on socket 0 00:04:59.065 EAL: Detected lcore 9 as core 0 on socket 0 00:04:59.065 EAL: Maximum logical cores by configuration: 128 00:04:59.065 EAL: Detected CPU lcores: 10 00:04:59.065 EAL: Detected NUMA nodes: 1 00:04:59.065 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:59.065 EAL: Detected shared linkage of DPDK 00:04:59.065 EAL: No shared files mode enabled, IPC will be disabled 00:04:59.065 EAL: Selected IOVA mode 'PA' 00:04:59.065 EAL: Probing VFIO support... 00:04:59.065 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:59.065 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:59.065 EAL: Ask a virtual area of 0x2e000 bytes 00:04:59.065 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:59.065 EAL: Setting up physically contiguous memory... 00:04:59.065 EAL: Setting maximum number of open files to 524288 00:04:59.065 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:59.065 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:59.065 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.065 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:59.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.065 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.065 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:59.065 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:59.065 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.065 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:59.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.065 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.065 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:59.065 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:59.065 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.065 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:59.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.065 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.065 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:59.065 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:59.065 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.065 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:59.065 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.065 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.065 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:59.065 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:59.065 EAL: Hugepages will be freed exactly as allocated. 
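As a sanity check on the EAL output above: each memseg list reserves 0x61000 bytes of metadata plus 0x400000000 bytes of virtual address space, which is exactly 8192 segments of 2 MiB hugepages, and four such lists are set up for socket 0. The shell arithmetic below is purely illustrative.

```bash
# One memseg list: 8192 hugepage-sized segments of 2 MiB each.
printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # 0x400000000 bytes = 16 GiB per list
# Four lists reserved on socket 0 in the trace above:
echo "$(( 4 * 16 )) GiB of reservable VA"       # 64 GiB in total
```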
00:04:59.065 EAL: No shared files mode enabled, IPC is disabled 00:04:59.065 EAL: No shared files mode enabled, IPC is disabled 00:04:59.324 EAL: TSC frequency is ~2200000 KHz 00:04:59.324 EAL: Main lcore 0 is ready (tid=7f7bfca9ea00;cpuset=[0]) 00:04:59.324 EAL: Trying to obtain current memory policy. 00:04:59.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.324 EAL: Restoring previous memory policy: 0 00:04:59.324 EAL: request: mp_malloc_sync 00:04:59.324 EAL: No shared files mode enabled, IPC is disabled 00:04:59.324 EAL: Heap on socket 0 was expanded by 2MB 00:04:59.324 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:59.324 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:59.324 EAL: Mem event callback 'spdk:(nil)' registered 00:04:59.324 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:59.324 00:04:59.324 00:04:59.324 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.324 http://cunit.sourceforge.net/ 00:04:59.324 00:04:59.324 00:04:59.324 Suite: components_suite 00:04:59.324 Test: vtophys_malloc_test ...passed 00:04:59.324 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:59.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.324 EAL: Restoring previous memory policy: 4 00:04:59.324 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.324 EAL: request: mp_malloc_sync 00:04:59.324 EAL: No shared files mode enabled, IPC is disabled 00:04:59.324 EAL: Heap on socket 0 was expanded by 4MB 00:04:59.324 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.324 EAL: request: mp_malloc_sync 00:04:59.324 EAL: No shared files mode enabled, IPC is disabled 00:04:59.324 EAL: Heap on socket 0 was shrunk by 4MB 00:04:59.324 EAL: Trying to obtain current memory policy. 00:04:59.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.324 EAL: Restoring previous memory policy: 4 00:04:59.324 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.324 EAL: request: mp_malloc_sync 00:04:59.324 EAL: No shared files mode enabled, IPC is disabled 00:04:59.324 EAL: Heap on socket 0 was expanded by 6MB 00:04:59.324 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.324 EAL: request: mp_malloc_sync 00:04:59.324 EAL: No shared files mode enabled, IPC is disabled 00:04:59.324 EAL: Heap on socket 0 was shrunk by 6MB 00:04:59.324 EAL: Trying to obtain current memory policy. 00:04:59.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.324 EAL: Restoring previous memory policy: 4 00:04:59.324 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.324 EAL: request: mp_malloc_sync 00:04:59.324 EAL: No shared files mode enabled, IPC is disabled 00:04:59.324 EAL: Heap on socket 0 was expanded by 10MB 00:04:59.324 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.324 EAL: request: mp_malloc_sync 00:04:59.324 EAL: No shared files mode enabled, IPC is disabled 00:04:59.324 EAL: Heap on socket 0 was shrunk by 10MB 00:04:59.324 EAL: Trying to obtain current memory policy. 
00:04:59.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.324 EAL: Restoring previous memory policy: 4 00:04:59.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.325 EAL: request: mp_malloc_sync 00:04:59.325 EAL: No shared files mode enabled, IPC is disabled 00:04:59.325 EAL: Heap on socket 0 was expanded by 18MB 00:04:59.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.325 EAL: request: mp_malloc_sync 00:04:59.325 EAL: No shared files mode enabled, IPC is disabled 00:04:59.325 EAL: Heap on socket 0 was shrunk by 18MB 00:04:59.325 EAL: Trying to obtain current memory policy. 00:04:59.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.325 EAL: Restoring previous memory policy: 4 00:04:59.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.325 EAL: request: mp_malloc_sync 00:04:59.325 EAL: No shared files mode enabled, IPC is disabled 00:04:59.325 EAL: Heap on socket 0 was expanded by 34MB 00:04:59.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.325 EAL: request: mp_malloc_sync 00:04:59.325 EAL: No shared files mode enabled, IPC is disabled 00:04:59.325 EAL: Heap on socket 0 was shrunk by 34MB 00:04:59.325 EAL: Trying to obtain current memory policy. 00:04:59.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.325 EAL: Restoring previous memory policy: 4 00:04:59.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.325 EAL: request: mp_malloc_sync 00:04:59.325 EAL: No shared files mode enabled, IPC is disabled 00:04:59.325 EAL: Heap on socket 0 was expanded by 66MB 00:04:59.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.325 EAL: request: mp_malloc_sync 00:04:59.325 EAL: No shared files mode enabled, IPC is disabled 00:04:59.325 EAL: Heap on socket 0 was shrunk by 66MB 00:04:59.325 EAL: Trying to obtain current memory policy. 00:04:59.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.325 EAL: Restoring previous memory policy: 4 00:04:59.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.325 EAL: request: mp_malloc_sync 00:04:59.325 EAL: No shared files mode enabled, IPC is disabled 00:04:59.325 EAL: Heap on socket 0 was expanded by 130MB 00:04:59.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.325 EAL: request: mp_malloc_sync 00:04:59.325 EAL: No shared files mode enabled, IPC is disabled 00:04:59.325 EAL: Heap on socket 0 was shrunk by 130MB 00:04:59.325 EAL: Trying to obtain current memory policy. 00:04:59.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.325 EAL: Restoring previous memory policy: 4 00:04:59.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.325 EAL: request: mp_malloc_sync 00:04:59.325 EAL: No shared files mode enabled, IPC is disabled 00:04:59.325 EAL: Heap on socket 0 was expanded by 258MB 00:04:59.325 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.325 EAL: request: mp_malloc_sync 00:04:59.325 EAL: No shared files mode enabled, IPC is disabled 00:04:59.325 EAL: Heap on socket 0 was shrunk by 258MB 00:04:59.325 EAL: Trying to obtain current memory policy. 
00:04:59.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.584 EAL: Restoring previous memory policy: 4 00:04:59.584 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.584 EAL: request: mp_malloc_sync 00:04:59.584 EAL: No shared files mode enabled, IPC is disabled 00:04:59.584 EAL: Heap on socket 0 was expanded by 514MB 00:04:59.584 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.584 EAL: request: mp_malloc_sync 00:04:59.584 EAL: No shared files mode enabled, IPC is disabled 00:04:59.584 EAL: Heap on socket 0 was shrunk by 514MB 00:04:59.584 EAL: Trying to obtain current memory policy. 00:04:59.584 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.842 EAL: Restoring previous memory policy: 4 00:04:59.842 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.842 EAL: request: mp_malloc_sync 00:04:59.842 EAL: No shared files mode enabled, IPC is disabled 00:04:59.842 EAL: Heap on socket 0 was expanded by 1026MB 00:04:59.842 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.842 EAL: request: mp_malloc_sync 00:04:59.842 EAL: No shared files mode enabled, IPC is disabled 00:04:59.842 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:59.842 passed 00:04:59.842 00:04:59.842 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.842 suites 1 1 n/a 0 0 00:04:59.842 tests 2 2 2 0 0 00:04:59.842 asserts 5246 5246 5246 0 n/a 00:04:59.842 00:04:59.842 Elapsed time = 0.660 seconds 00:04:59.842 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.842 EAL: request: mp_malloc_sync 00:04:59.842 EAL: No shared files mode enabled, IPC is disabled 00:04:59.842 EAL: Heap on socket 0 was shrunk by 2MB 00:04:59.842 EAL: No shared files mode enabled, IPC is disabled 00:04:59.842 EAL: No shared files mode enabled, IPC is disabled 00:04:59.842 EAL: No shared files mode enabled, IPC is disabled 00:04:59.842 00:04:59.842 real 0m0.846s 00:04:59.842 user 0m0.427s 00:04:59.842 sys 0m0.292s 00:04:59.842 21:59:46 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.842 21:59:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:59.842 ************************************ 00:04:59.842 END TEST env_vtophys 00:04:59.842 ************************************ 00:05:00.100 21:59:46 env -- common/autotest_common.sh@1142 -- # return 0 00:05:00.100 21:59:46 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.100 21:59:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.100 21:59:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.100 21:59:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.100 ************************************ 00:05:00.100 START TEST env_pci 00:05:00.100 ************************************ 00:05:00.100 21:59:46 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.100 00:05:00.100 00:05:00.100 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.100 http://cunit.sourceforge.net/ 00:05:00.100 00:05:00.100 00:05:00.100 Suite: pci 00:05:00.100 Test: pci_hook ...[2024-07-15 21:59:46.813602] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60606 has claimed it 00:05:00.100 passed 00:05:00.100 00:05:00.100 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.100 suites 1 1 n/a 0 0 00:05:00.100 tests 1 1 1 0 0 00:05:00.100 asserts 25 25 25 0 n/a 00:05:00.100 
00:05:00.100 Elapsed time = 0.003 secondsEAL: Cannot find device (10000:00:01.0) 00:05:00.100 EAL: Failed to attach device on primary process 00:05:00.100 00:05:00.100 00:05:00.100 real 0m0.021s 00:05:00.100 user 0m0.009s 00:05:00.100 sys 0m0.011s 00:05:00.100 21:59:46 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.100 21:59:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:00.100 ************************************ 00:05:00.100 END TEST env_pci 00:05:00.100 ************************************ 00:05:00.101 21:59:46 env -- common/autotest_common.sh@1142 -- # return 0 00:05:00.101 21:59:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:00.101 21:59:46 env -- env/env.sh@15 -- # uname 00:05:00.101 21:59:46 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:00.101 21:59:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:00.101 21:59:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.101 21:59:46 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:00.101 21:59:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.101 21:59:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.101 ************************************ 00:05:00.101 START TEST env_dpdk_post_init 00:05:00.101 ************************************ 00:05:00.101 21:59:46 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.101 EAL: Detected CPU lcores: 10 00:05:00.101 EAL: Detected NUMA nodes: 1 00:05:00.101 EAL: Detected shared linkage of DPDK 00:05:00.101 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.101 EAL: Selected IOVA mode 'PA' 00:05:00.101 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:00.101 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:00.101 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:00.360 Starting DPDK initialization... 00:05:00.360 Starting SPDK post initialization... 00:05:00.360 SPDK NVMe probe 00:05:00.360 Attaching to 0000:00:10.0 00:05:00.360 Attaching to 0000:00:11.0 00:05:00.360 Attached to 0000:00:10.0 00:05:00.360 Attached to 0000:00:11.0 00:05:00.360 Cleaning up... 
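The env_dpdk_post_init run above initializes DPDK on a single core, probes both emulated NVMe controllers (1b36:0010) through the spdk_nvme driver, and cleans up again. Reproducing it by hand on this VM would look roughly like the sketch below; the HUGEMEM value is an assumption about how much hugepage memory to reserve, not something read from this log.

```bash
cd /home/vagrant/spdk_repo/spdk             # repo path used throughout this log

sudo HUGEMEM=2048 scripts/setup.sh          # bind NVMe devices to a userspace driver, reserve hugepages
sudo test/env/env_dpdk_post_init/env_dpdk_post_init \
    -c 0x1 --base-virtaddr=0x200000000000   # same core mask and base VA as in the trace
sudo scripts/setup.sh reset                 # hand the devices back to the kernel nvme driver
```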
00:05:00.360 00:05:00.360 real 0m0.187s 00:05:00.360 user 0m0.046s 00:05:00.360 sys 0m0.040s 00:05:00.360 21:59:47 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.360 21:59:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.360 ************************************ 00:05:00.360 END TEST env_dpdk_post_init 00:05:00.360 ************************************ 00:05:00.360 21:59:47 env -- common/autotest_common.sh@1142 -- # return 0 00:05:00.360 21:59:47 env -- env/env.sh@26 -- # uname 00:05:00.360 21:59:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:00.360 21:59:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.360 21:59:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.360 21:59:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.360 21:59:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.360 ************************************ 00:05:00.360 START TEST env_mem_callbacks 00:05:00.360 ************************************ 00:05:00.360 21:59:47 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.360 EAL: Detected CPU lcores: 10 00:05:00.360 EAL: Detected NUMA nodes: 1 00:05:00.360 EAL: Detected shared linkage of DPDK 00:05:00.360 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.360 EAL: Selected IOVA mode 'PA' 00:05:00.360 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:00.360 00:05:00.360 00:05:00.360 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.360 http://cunit.sourceforge.net/ 00:05:00.360 00:05:00.360 00:05:00.360 Suite: memory 00:05:00.360 Test: test ... 
00:05:00.360 register 0x200000200000 2097152 00:05:00.360 malloc 3145728 00:05:00.360 register 0x200000400000 4194304 00:05:00.360 buf 0x200000500000 len 3145728 PASSED 00:05:00.360 malloc 64 00:05:00.360 buf 0x2000004fff40 len 64 PASSED 00:05:00.360 malloc 4194304 00:05:00.360 register 0x200000800000 6291456 00:05:00.360 buf 0x200000a00000 len 4194304 PASSED 00:05:00.360 free 0x200000500000 3145728 00:05:00.360 free 0x2000004fff40 64 00:05:00.360 unregister 0x200000400000 4194304 PASSED 00:05:00.360 free 0x200000a00000 4194304 00:05:00.360 unregister 0x200000800000 6291456 PASSED 00:05:00.360 malloc 8388608 00:05:00.360 register 0x200000400000 10485760 00:05:00.360 buf 0x200000600000 len 8388608 PASSED 00:05:00.360 free 0x200000600000 8388608 00:05:00.360 unregister 0x200000400000 10485760 PASSED 00:05:00.360 passed 00:05:00.360 00:05:00.360 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.360 suites 1 1 n/a 0 0 00:05:00.360 tests 1 1 1 0 0 00:05:00.360 asserts 15 15 15 0 n/a 00:05:00.360 00:05:00.360 Elapsed time = 0.007 seconds 00:05:00.360 00:05:00.360 real 0m0.139s 00:05:00.360 user 0m0.023s 00:05:00.360 sys 0m0.015s 00:05:00.360 21:59:47 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.360 21:59:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:00.360 ************************************ 00:05:00.360 END TEST env_mem_callbacks 00:05:00.360 ************************************ 00:05:00.360 21:59:47 env -- common/autotest_common.sh@1142 -- # return 0 00:05:00.360 00:05:00.360 real 0m1.733s 00:05:00.360 user 0m0.827s 00:05:00.360 sys 0m0.565s 00:05:00.360 21:59:47 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.360 21:59:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.360 ************************************ 00:05:00.360 END TEST env 00:05:00.360 ************************************ 00:05:00.620 21:59:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:00.620 21:59:47 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:00.620 21:59:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.620 21:59:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.620 21:59:47 -- common/autotest_common.sh@10 -- # set +x 00:05:00.620 ************************************ 00:05:00.620 START TEST rpc 00:05:00.620 ************************************ 00:05:00.620 21:59:47 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:00.620 * Looking for test storage... 00:05:00.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.620 21:59:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60716 00:05:00.620 21:59:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.620 21:59:47 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:00.620 21:59:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60716 00:05:00.620 21:59:47 rpc -- common/autotest_common.sh@829 -- # '[' -z 60716 ']' 00:05:00.620 21:59:47 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.620 21:59:47 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.620 21:59:47 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
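The rpc tests that follow drive the spdk_tgt instance launched above over its JSON-RPC socket at /var/tmp/spdk.sock; rpc_cmd is the harness helper for that. The rpc_integrity flow shown further down could be replayed by hand with scripts/rpc.py, roughly as in this sketch (the Malloc0 name and the two-bdev count come from the output below; exact helper plumbing aside):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    $rpc -s $sock bdev_malloc_create 8 512              # 8 MiB malloc bdev, 512 B blocks (16384 blocks)
    $rpc -s $sock bdev_passthru_create -b Malloc0 -p Passthru0
    $rpc -s $sock bdev_get_bdevs | jq length            # expect 2: Malloc0 plus Passthru0
    $rpc -s $sock bdev_passthru_delete Passthru0
    $rpc -s $sock bdev_malloc_delete Malloc0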
00:05:00.620 21:59:47 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.620 21:59:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.620 [2024-07-15 21:59:47.468005] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:05:00.620 [2024-07-15 21:59:47.468120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60716 ] 00:05:00.878 [2024-07-15 21:59:47.599010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.878 [2024-07-15 21:59:47.686013] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:00.878 [2024-07-15 21:59:47.686105] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60716' to capture a snapshot of events at runtime. 00:05:00.878 [2024-07-15 21:59:47.686127] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:00.878 [2024-07-15 21:59:47.686153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:00.878 [2024-07-15 21:59:47.686165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60716 for offline analysis/debug. 00:05:00.878 [2024-07-15 21:59:47.686210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.814 21:59:48 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.814 21:59:48 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:01.814 21:59:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.814 21:59:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.814 21:59:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:01.814 21:59:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:01.814 21:59:48 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.814 21:59:48 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.814 21:59:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.814 ************************************ 00:05:01.814 START TEST rpc_integrity 00:05:01.814 ************************************ 00:05:01.814 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:01.814 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:01.814 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.814 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.814 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.814 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:01.814 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:01.814 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:01.814 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:01.814 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.814 21:59:48 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.814 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.814 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:01.814 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:01.815 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.815 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.815 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.815 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:01.815 { 00:05:01.815 "aliases": [ 00:05:01.815 "1817af7a-69bd-4137-9451-4cda499b596a" 00:05:01.815 ], 00:05:01.815 "assigned_rate_limits": { 00:05:01.815 "r_mbytes_per_sec": 0, 00:05:01.815 "rw_ios_per_sec": 0, 00:05:01.815 "rw_mbytes_per_sec": 0, 00:05:01.815 "w_mbytes_per_sec": 0 00:05:01.815 }, 00:05:01.815 "block_size": 512, 00:05:01.815 "claimed": false, 00:05:01.815 "driver_specific": {}, 00:05:01.815 "memory_domains": [ 00:05:01.815 { 00:05:01.815 "dma_device_id": "system", 00:05:01.815 "dma_device_type": 1 00:05:01.815 }, 00:05:01.815 { 00:05:01.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.815 "dma_device_type": 2 00:05:01.815 } 00:05:01.815 ], 00:05:01.815 "name": "Malloc0", 00:05:01.815 "num_blocks": 16384, 00:05:01.815 "product_name": "Malloc disk", 00:05:01.815 "supported_io_types": { 00:05:01.815 "abort": true, 00:05:01.815 "compare": false, 00:05:01.815 "compare_and_write": false, 00:05:01.815 "copy": true, 00:05:01.815 "flush": true, 00:05:01.815 "get_zone_info": false, 00:05:01.815 "nvme_admin": false, 00:05:01.815 "nvme_io": false, 00:05:01.815 "nvme_io_md": false, 00:05:01.815 "nvme_iov_md": false, 00:05:01.815 "read": true, 00:05:01.815 "reset": true, 00:05:01.815 "seek_data": false, 00:05:01.815 "seek_hole": false, 00:05:01.815 "unmap": true, 00:05:01.815 "write": true, 00:05:01.815 "write_zeroes": true, 00:05:01.815 "zcopy": true, 00:05:01.815 "zone_append": false, 00:05:01.815 "zone_management": false 00:05:01.815 }, 00:05:01.815 "uuid": "1817af7a-69bd-4137-9451-4cda499b596a", 00:05:01.815 "zoned": false 00:05:01.815 } 00:05:01.815 ]' 00:05:01.815 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.815 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.815 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:01.815 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.815 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.815 [2024-07-15 21:59:48.702501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:01.815 [2024-07-15 21:59:48.702558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.815 [2024-07-15 21:59:48.702577] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x85ead0 00:05:01.815 [2024-07-15 21:59:48.702587] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.815 [2024-07-15 21:59:48.704159] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.815 [2024-07-15 21:59:48.704195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.815 Passthru0 00:05:01.815 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.815 21:59:48 
rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:01.815 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.815 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.815 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.815 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.815 { 00:05:01.815 "aliases": [ 00:05:01.815 "1817af7a-69bd-4137-9451-4cda499b596a" 00:05:01.815 ], 00:05:01.815 "assigned_rate_limits": { 00:05:01.815 "r_mbytes_per_sec": 0, 00:05:01.815 "rw_ios_per_sec": 0, 00:05:01.815 "rw_mbytes_per_sec": 0, 00:05:01.815 "w_mbytes_per_sec": 0 00:05:01.815 }, 00:05:01.815 "block_size": 512, 00:05:01.815 "claim_type": "exclusive_write", 00:05:01.815 "claimed": true, 00:05:01.815 "driver_specific": {}, 00:05:01.815 "memory_domains": [ 00:05:01.815 { 00:05:01.815 "dma_device_id": "system", 00:05:01.815 "dma_device_type": 1 00:05:01.815 }, 00:05:01.815 { 00:05:01.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.815 "dma_device_type": 2 00:05:01.815 } 00:05:01.815 ], 00:05:01.815 "name": "Malloc0", 00:05:01.815 "num_blocks": 16384, 00:05:01.815 "product_name": "Malloc disk", 00:05:01.815 "supported_io_types": { 00:05:01.815 "abort": true, 00:05:01.815 "compare": false, 00:05:01.815 "compare_and_write": false, 00:05:01.815 "copy": true, 00:05:01.815 "flush": true, 00:05:01.815 "get_zone_info": false, 00:05:01.815 "nvme_admin": false, 00:05:01.815 "nvme_io": false, 00:05:01.815 "nvme_io_md": false, 00:05:01.815 "nvme_iov_md": false, 00:05:01.815 "read": true, 00:05:01.815 "reset": true, 00:05:01.815 "seek_data": false, 00:05:01.815 "seek_hole": false, 00:05:01.815 "unmap": true, 00:05:01.815 "write": true, 00:05:01.815 "write_zeroes": true, 00:05:01.815 "zcopy": true, 00:05:01.815 "zone_append": false, 00:05:01.815 "zone_management": false 00:05:01.815 }, 00:05:01.815 "uuid": "1817af7a-69bd-4137-9451-4cda499b596a", 00:05:01.815 "zoned": false 00:05:01.815 }, 00:05:01.815 { 00:05:01.815 "aliases": [ 00:05:01.815 "71fa8a73-48a5-5ad5-a62a-04fc97d9a7d6" 00:05:01.815 ], 00:05:01.815 "assigned_rate_limits": { 00:05:01.815 "r_mbytes_per_sec": 0, 00:05:01.815 "rw_ios_per_sec": 0, 00:05:01.815 "rw_mbytes_per_sec": 0, 00:05:01.815 "w_mbytes_per_sec": 0 00:05:01.815 }, 00:05:01.815 "block_size": 512, 00:05:01.815 "claimed": false, 00:05:01.815 "driver_specific": { 00:05:01.815 "passthru": { 00:05:01.815 "base_bdev_name": "Malloc0", 00:05:01.815 "name": "Passthru0" 00:05:01.815 } 00:05:01.815 }, 00:05:01.815 "memory_domains": [ 00:05:01.815 { 00:05:01.815 "dma_device_id": "system", 00:05:01.815 "dma_device_type": 1 00:05:01.815 }, 00:05:01.815 { 00:05:01.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.815 "dma_device_type": 2 00:05:01.815 } 00:05:01.815 ], 00:05:01.815 "name": "Passthru0", 00:05:01.815 "num_blocks": 16384, 00:05:01.815 "product_name": "passthru", 00:05:01.815 "supported_io_types": { 00:05:01.815 "abort": true, 00:05:01.815 "compare": false, 00:05:01.815 "compare_and_write": false, 00:05:01.815 "copy": true, 00:05:01.815 "flush": true, 00:05:01.815 "get_zone_info": false, 00:05:01.815 "nvme_admin": false, 00:05:01.815 "nvme_io": false, 00:05:01.815 "nvme_io_md": false, 00:05:01.815 "nvme_iov_md": false, 00:05:01.815 "read": true, 00:05:01.815 "reset": true, 00:05:01.815 "seek_data": false, 00:05:01.815 "seek_hole": false, 00:05:01.815 "unmap": true, 00:05:01.815 "write": true, 00:05:01.815 "write_zeroes": true, 00:05:01.815 "zcopy": true, 
00:05:01.815 "zone_append": false, 00:05:01.815 "zone_management": false 00:05:01.815 }, 00:05:01.815 "uuid": "71fa8a73-48a5-5ad5-a62a-04fc97d9a7d6", 00:05:01.815 "zoned": false 00:05:01.815 } 00:05:01.815 ]' 00:05:01.815 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:02.075 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.075 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.075 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.075 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.075 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.075 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:02.075 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.075 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.075 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.075 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:02.075 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.075 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.075 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.075 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:02.075 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:02.075 21:59:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:02.075 00:05:02.075 real 0m0.309s 00:05:02.075 user 0m0.198s 00:05:02.075 sys 0m0.034s 00:05:02.075 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.075 21:59:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.075 ************************************ 00:05:02.075 END TEST rpc_integrity 00:05:02.075 ************************************ 00:05:02.075 21:59:48 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:02.075 21:59:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:02.075 21:59:48 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.075 21:59:48 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.075 21:59:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.075 ************************************ 00:05:02.075 START TEST rpc_plugins 00:05:02.075 ************************************ 00:05:02.075 21:59:48 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:02.075 21:59:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:02.075 21:59:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.075 21:59:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.075 21:59:48 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.075 21:59:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:02.075 21:59:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:02.075 21:59:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.075 21:59:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.075 21:59:48 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.075 21:59:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:02.075 { 
00:05:02.075 "aliases": [ 00:05:02.075 "570c5040-c1c5-4d97-89fa-dcc5f39cb72f" 00:05:02.075 ], 00:05:02.075 "assigned_rate_limits": { 00:05:02.075 "r_mbytes_per_sec": 0, 00:05:02.075 "rw_ios_per_sec": 0, 00:05:02.075 "rw_mbytes_per_sec": 0, 00:05:02.075 "w_mbytes_per_sec": 0 00:05:02.075 }, 00:05:02.075 "block_size": 4096, 00:05:02.075 "claimed": false, 00:05:02.075 "driver_specific": {}, 00:05:02.075 "memory_domains": [ 00:05:02.075 { 00:05:02.075 "dma_device_id": "system", 00:05:02.075 "dma_device_type": 1 00:05:02.075 }, 00:05:02.075 { 00:05:02.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.075 "dma_device_type": 2 00:05:02.075 } 00:05:02.075 ], 00:05:02.075 "name": "Malloc1", 00:05:02.075 "num_blocks": 256, 00:05:02.075 "product_name": "Malloc disk", 00:05:02.075 "supported_io_types": { 00:05:02.075 "abort": true, 00:05:02.075 "compare": false, 00:05:02.075 "compare_and_write": false, 00:05:02.075 "copy": true, 00:05:02.075 "flush": true, 00:05:02.075 "get_zone_info": false, 00:05:02.075 "nvme_admin": false, 00:05:02.075 "nvme_io": false, 00:05:02.075 "nvme_io_md": false, 00:05:02.075 "nvme_iov_md": false, 00:05:02.075 "read": true, 00:05:02.075 "reset": true, 00:05:02.075 "seek_data": false, 00:05:02.075 "seek_hole": false, 00:05:02.075 "unmap": true, 00:05:02.075 "write": true, 00:05:02.075 "write_zeroes": true, 00:05:02.075 "zcopy": true, 00:05:02.075 "zone_append": false, 00:05:02.075 "zone_management": false 00:05:02.075 }, 00:05:02.075 "uuid": "570c5040-c1c5-4d97-89fa-dcc5f39cb72f", 00:05:02.075 "zoned": false 00:05:02.075 } 00:05:02.075 ]' 00:05:02.075 21:59:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:02.075 21:59:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:02.075 21:59:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:02.075 21:59:48 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.075 21:59:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.075 21:59:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.075 21:59:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:02.075 21:59:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.075 21:59:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.075 21:59:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.075 21:59:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:02.075 21:59:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:02.334 21:59:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:02.334 00:05:02.334 real 0m0.183s 00:05:02.334 user 0m0.123s 00:05:02.334 sys 0m0.021s 00:05:02.334 21:59:49 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.334 21:59:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.334 ************************************ 00:05:02.334 END TEST rpc_plugins 00:05:02.334 ************************************ 00:05:02.334 21:59:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:02.334 21:59:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:02.334 21:59:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.335 21:59:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.335 21:59:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.335 ************************************ 00:05:02.335 START TEST 
rpc_trace_cmd_test 00:05:02.335 ************************************ 00:05:02.335 21:59:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:02.335 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:02.335 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:02.335 21:59:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.335 21:59:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:02.335 21:59:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.335 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:02.335 "bdev": { 00:05:02.335 "mask": "0x8", 00:05:02.335 "tpoint_mask": "0xffffffffffffffff" 00:05:02.335 }, 00:05:02.335 "bdev_nvme": { 00:05:02.335 "mask": "0x4000", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "blobfs": { 00:05:02.335 "mask": "0x80", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "dsa": { 00:05:02.335 "mask": "0x200", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "ftl": { 00:05:02.335 "mask": "0x40", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "iaa": { 00:05:02.335 "mask": "0x1000", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "iscsi_conn": { 00:05:02.335 "mask": "0x2", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "nvme_pcie": { 00:05:02.335 "mask": "0x800", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "nvme_tcp": { 00:05:02.335 "mask": "0x2000", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "nvmf_rdma": { 00:05:02.335 "mask": "0x10", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "nvmf_tcp": { 00:05:02.335 "mask": "0x20", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "scsi": { 00:05:02.335 "mask": "0x4", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "sock": { 00:05:02.335 "mask": "0x8000", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "thread": { 00:05:02.335 "mask": "0x400", 00:05:02.335 "tpoint_mask": "0x0" 00:05:02.335 }, 00:05:02.335 "tpoint_group_mask": "0x8", 00:05:02.335 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60716" 00:05:02.335 }' 00:05:02.335 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:02.335 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:02.335 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:02.335 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:02.335 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:02.594 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:02.594 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:02.594 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:02.594 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:02.594 21:59:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:02.594 00:05:02.594 real 0m0.266s 00:05:02.594 user 0m0.230s 00:05:02.594 sys 0m0.025s 00:05:02.594 21:59:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.594 ************************************ 00:05:02.594 END TEST rpc_trace_cmd_test 00:05:02.594 21:59:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 
-- # set +x 00:05:02.594 ************************************ 00:05:02.594 21:59:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:02.594 21:59:49 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:02.594 21:59:49 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:02.594 21:59:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.594 21:59:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.594 21:59:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.594 ************************************ 00:05:02.594 START TEST go_rpc 00:05:02.594 ************************************ 00:05:02.594 21:59:49 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:05:02.594 21:59:49 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:02.594 21:59:49 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:02.594 21:59:49 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:02.594 21:59:49 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:02.594 21:59:49 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.594 21:59:49 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.594 21:59:49 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.594 21:59:49 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.594 21:59:49 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:02.594 21:59:49 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:02.851 21:59:49 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["eab4b227-da35-484c-84fc-1a5b9217f4c6"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"eab4b227-da35-484c-84fc-1a5b9217f4c6","zoned":false}]' 00:05:02.851 21:59:49 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:02.851 21:59:49 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:02.851 21:59:49 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:02.851 21:59:49 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.851 21:59:49 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.851 21:59:49 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.851 21:59:49 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:02.851 21:59:49 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:02.851 21:59:49 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:02.851 21:59:49 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:02.851 00:05:02.851 real 0m0.222s 00:05:02.851 user 0m0.155s 00:05:02.851 sys 0m0.032s 00:05:02.851 21:59:49 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.851 21:59:49 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.851 ************************************ 00:05:02.851 END TEST go_rpc 
00:05:02.851 ************************************ 00:05:02.851 21:59:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:02.851 21:59:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:02.851 21:59:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:02.851 21:59:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.851 21:59:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.851 21:59:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.851 ************************************ 00:05:02.851 START TEST rpc_daemon_integrity 00:05:02.851 ************************************ 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.851 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.109 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.109 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.109 { 00:05:03.109 "aliases": [ 00:05:03.109 "6c668a9f-34a6-44f4-8135-ad3a388177d1" 00:05:03.109 ], 00:05:03.109 "assigned_rate_limits": { 00:05:03.109 "r_mbytes_per_sec": 0, 00:05:03.109 "rw_ios_per_sec": 0, 00:05:03.109 "rw_mbytes_per_sec": 0, 00:05:03.109 "w_mbytes_per_sec": 0 00:05:03.109 }, 00:05:03.109 "block_size": 512, 00:05:03.109 "claimed": false, 00:05:03.109 "driver_specific": {}, 00:05:03.109 "memory_domains": [ 00:05:03.109 { 00:05:03.109 "dma_device_id": "system", 00:05:03.109 "dma_device_type": 1 00:05:03.109 }, 00:05:03.109 { 00:05:03.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.109 "dma_device_type": 2 00:05:03.109 } 00:05:03.109 ], 00:05:03.109 "name": "Malloc3", 00:05:03.109 "num_blocks": 16384, 00:05:03.109 "product_name": "Malloc disk", 00:05:03.109 "supported_io_types": { 00:05:03.109 "abort": true, 00:05:03.109 "compare": false, 00:05:03.109 "compare_and_write": false, 00:05:03.109 "copy": true, 00:05:03.110 "flush": true, 00:05:03.110 "get_zone_info": false, 00:05:03.110 "nvme_admin": false, 00:05:03.110 "nvme_io": false, 00:05:03.110 "nvme_io_md": false, 00:05:03.110 "nvme_iov_md": false, 00:05:03.110 "read": true, 00:05:03.110 "reset": true, 00:05:03.110 "seek_data": false, 
00:05:03.110 "seek_hole": false, 00:05:03.110 "unmap": true, 00:05:03.110 "write": true, 00:05:03.110 "write_zeroes": true, 00:05:03.110 "zcopy": true, 00:05:03.110 "zone_append": false, 00:05:03.110 "zone_management": false 00:05:03.110 }, 00:05:03.110 "uuid": "6c668a9f-34a6-44f4-8135-ad3a388177d1", 00:05:03.110 "zoned": false 00:05:03.110 } 00:05:03.110 ]' 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.110 [2024-07-15 21:59:49.886936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:03.110 [2024-07-15 21:59:49.886997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.110 [2024-07-15 21:59:49.887018] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa55d70 00:05:03.110 [2024-07-15 21:59:49.887027] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.110 [2024-07-15 21:59:49.888557] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.110 [2024-07-15 21:59:49.888593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.110 Passthru0 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.110 { 00:05:03.110 "aliases": [ 00:05:03.110 "6c668a9f-34a6-44f4-8135-ad3a388177d1" 00:05:03.110 ], 00:05:03.110 "assigned_rate_limits": { 00:05:03.110 "r_mbytes_per_sec": 0, 00:05:03.110 "rw_ios_per_sec": 0, 00:05:03.110 "rw_mbytes_per_sec": 0, 00:05:03.110 "w_mbytes_per_sec": 0 00:05:03.110 }, 00:05:03.110 "block_size": 512, 00:05:03.110 "claim_type": "exclusive_write", 00:05:03.110 "claimed": true, 00:05:03.110 "driver_specific": {}, 00:05:03.110 "memory_domains": [ 00:05:03.110 { 00:05:03.110 "dma_device_id": "system", 00:05:03.110 "dma_device_type": 1 00:05:03.110 }, 00:05:03.110 { 00:05:03.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.110 "dma_device_type": 2 00:05:03.110 } 00:05:03.110 ], 00:05:03.110 "name": "Malloc3", 00:05:03.110 "num_blocks": 16384, 00:05:03.110 "product_name": "Malloc disk", 00:05:03.110 "supported_io_types": { 00:05:03.110 "abort": true, 00:05:03.110 "compare": false, 00:05:03.110 "compare_and_write": false, 00:05:03.110 "copy": true, 00:05:03.110 "flush": true, 00:05:03.110 "get_zone_info": false, 00:05:03.110 "nvme_admin": false, 00:05:03.110 "nvme_io": false, 00:05:03.110 "nvme_io_md": false, 00:05:03.110 "nvme_iov_md": false, 00:05:03.110 "read": true, 00:05:03.110 "reset": true, 00:05:03.110 "seek_data": false, 00:05:03.110 "seek_hole": false, 00:05:03.110 "unmap": true, 00:05:03.110 "write": true, 00:05:03.110 "write_zeroes": true, 
00:05:03.110 "zcopy": true, 00:05:03.110 "zone_append": false, 00:05:03.110 "zone_management": false 00:05:03.110 }, 00:05:03.110 "uuid": "6c668a9f-34a6-44f4-8135-ad3a388177d1", 00:05:03.110 "zoned": false 00:05:03.110 }, 00:05:03.110 { 00:05:03.110 "aliases": [ 00:05:03.110 "94957df3-65c4-5264-ba58-e9466f4c6bb2" 00:05:03.110 ], 00:05:03.110 "assigned_rate_limits": { 00:05:03.110 "r_mbytes_per_sec": 0, 00:05:03.110 "rw_ios_per_sec": 0, 00:05:03.110 "rw_mbytes_per_sec": 0, 00:05:03.110 "w_mbytes_per_sec": 0 00:05:03.110 }, 00:05:03.110 "block_size": 512, 00:05:03.110 "claimed": false, 00:05:03.110 "driver_specific": { 00:05:03.110 "passthru": { 00:05:03.110 "base_bdev_name": "Malloc3", 00:05:03.110 "name": "Passthru0" 00:05:03.110 } 00:05:03.110 }, 00:05:03.110 "memory_domains": [ 00:05:03.110 { 00:05:03.110 "dma_device_id": "system", 00:05:03.110 "dma_device_type": 1 00:05:03.110 }, 00:05:03.110 { 00:05:03.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.110 "dma_device_type": 2 00:05:03.110 } 00:05:03.110 ], 00:05:03.110 "name": "Passthru0", 00:05:03.110 "num_blocks": 16384, 00:05:03.110 "product_name": "passthru", 00:05:03.110 "supported_io_types": { 00:05:03.110 "abort": true, 00:05:03.110 "compare": false, 00:05:03.110 "compare_and_write": false, 00:05:03.110 "copy": true, 00:05:03.110 "flush": true, 00:05:03.110 "get_zone_info": false, 00:05:03.110 "nvme_admin": false, 00:05:03.110 "nvme_io": false, 00:05:03.110 "nvme_io_md": false, 00:05:03.110 "nvme_iov_md": false, 00:05:03.110 "read": true, 00:05:03.110 "reset": true, 00:05:03.110 "seek_data": false, 00:05:03.110 "seek_hole": false, 00:05:03.110 "unmap": true, 00:05:03.110 "write": true, 00:05:03.110 "write_zeroes": true, 00:05:03.110 "zcopy": true, 00:05:03.110 "zone_append": false, 00:05:03.110 "zone_management": false 00:05:03.110 }, 00:05:03.110 "uuid": "94957df3-65c4-5264-ba58-e9466f4c6bb2", 00:05:03.110 "zoned": false 00:05:03.110 } 00:05:03.110 ]' 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.110 21:59:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.110 ************************************ 00:05:03.110 END TEST rpc_daemon_integrity 00:05:03.110 
************************************ 00:05:03.110 21:59:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.110 00:05:03.110 real 0m0.334s 00:05:03.110 user 0m0.228s 00:05:03.110 sys 0m0.040s 00:05:03.110 21:59:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.110 21:59:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.367 21:59:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:03.367 21:59:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:03.367 21:59:50 rpc -- rpc/rpc.sh@84 -- # killprocess 60716 00:05:03.367 21:59:50 rpc -- common/autotest_common.sh@948 -- # '[' -z 60716 ']' 00:05:03.367 21:59:50 rpc -- common/autotest_common.sh@952 -- # kill -0 60716 00:05:03.367 21:59:50 rpc -- common/autotest_common.sh@953 -- # uname 00:05:03.367 21:59:50 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.367 21:59:50 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60716 00:05:03.367 killing process with pid 60716 00:05:03.367 21:59:50 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.367 21:59:50 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.367 21:59:50 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60716' 00:05:03.367 21:59:50 rpc -- common/autotest_common.sh@967 -- # kill 60716 00:05:03.367 21:59:50 rpc -- common/autotest_common.sh@972 -- # wait 60716 00:05:03.624 ************************************ 00:05:03.624 END TEST rpc 00:05:03.624 00:05:03.624 real 0m3.060s 00:05:03.624 user 0m4.234s 00:05:03.624 sys 0m0.651s 00:05:03.624 21:59:50 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.624 21:59:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.624 ************************************ 00:05:03.624 21:59:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:03.624 21:59:50 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:03.624 21:59:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.624 21:59:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.624 21:59:50 -- common/autotest_common.sh@10 -- # set +x 00:05:03.624 ************************************ 00:05:03.624 START TEST skip_rpc 00:05:03.624 ************************************ 00:05:03.624 21:59:50 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:03.624 * Looking for test storage... 
00:05:03.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:03.624 21:59:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:03.625 21:59:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:03.625 21:59:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:03.625 21:59:50 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.625 21:59:50 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.625 21:59:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.625 ************************************ 00:05:03.625 START TEST skip_rpc 00:05:03.625 ************************************ 00:05:03.625 21:59:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:03.625 21:59:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60977 00:05:03.625 21:59:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:03.625 21:59:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.625 21:59:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:03.882 [2024-07-15 21:59:50.582160] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:05:03.882 [2024-07-15 21:59:50.582258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60977 ] 00:05:03.882 [2024-07-15 21:59:50.748934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.882 [2024-07-15 21:59:50.820587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.138 2024/07/15 21:59:55 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:09.138 21:59:55 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60977 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60977 ']' 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60977 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60977 00:05:09.138 killing process with pid 60977 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60977' 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60977 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60977 00:05:09.138 ************************************ 00:05:09.138 END TEST skip_rpc 00:05:09.138 ************************************ 00:05:09.138 00:05:09.138 real 0m5.288s 00:05:09.138 user 0m5.025s 00:05:09.138 sys 0m0.160s 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.138 21:59:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.138 21:59:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:09.138 21:59:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:09.138 21:59:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.138 21:59:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.138 21:59:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.138 ************************************ 00:05:09.138 START TEST skip_rpc_with_json 00:05:09.138 ************************************ 00:05:09.138 21:59:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:09.138 21:59:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:09.138 21:59:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61064 00:05:09.138 21:59:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.138 21:59:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61064 00:05:09.138 21:59:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.139 21:59:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61064 ']' 00:05:09.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.139 21:59:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.139 21:59:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.139 21:59:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
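Before the config dump below, the skip_rpc_with_json test asks the freshly started target for its TCP transport (which fails with "no such device" until one exists), creates it, and then saves the running configuration to test/rpc/config.json. A rough manual equivalent with scripts/rpc.py, assuming the same socket and paths as in this log, is:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
    $rpc nvmf_get_transports --trtype tcp     # expected to fail before any transport exists
    $rpc nvmf_create_transport -t tcp         # target logs "*** TCP Transport Init ***"
    $rpc save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json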
00:05:09.139 21:59:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.139 21:59:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.139 [2024-07-15 21:59:55.903508] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:05:09.139 [2024-07-15 21:59:55.903596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61064 ] 00:05:09.139 [2024-07-15 21:59:56.038925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.397 [2024-07-15 21:59:56.099156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.397 [2024-07-15 21:59:56.258924] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:09.397 2024/07/15 21:59:56 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:09.397 request: 00:05:09.397 { 00:05:09.397 "method": "nvmf_get_transports", 00:05:09.397 "params": { 00:05:09.397 "trtype": "tcp" 00:05:09.397 } 00:05:09.397 } 00:05:09.397 Got JSON-RPC error response 00:05:09.397 GoRPCClient: error on JSON-RPC call 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.397 [2024-07-15 21:59:56.271040] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.397 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.656 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.656 21:59:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:09.656 { 00:05:09.656 "subsystems": [ 00:05:09.656 { 00:05:09.656 "subsystem": "keyring", 00:05:09.656 "config": [] 00:05:09.656 }, 00:05:09.656 { 00:05:09.656 "subsystem": "iobuf", 00:05:09.656 "config": [ 00:05:09.656 { 00:05:09.656 "method": "iobuf_set_options", 00:05:09.656 "params": { 00:05:09.656 "large_bufsize": 135168, 00:05:09.656 "large_pool_count": 1024, 00:05:09.656 "small_bufsize": 8192, 00:05:09.656 "small_pool_count": 8192 00:05:09.656 } 00:05:09.656 } 
00:05:09.656 ] 00:05:09.656 }, 00:05:09.656 { 00:05:09.656 "subsystem": "sock", 00:05:09.656 "config": [ 00:05:09.656 { 00:05:09.656 "method": "sock_set_default_impl", 00:05:09.656 "params": { 00:05:09.656 "impl_name": "posix" 00:05:09.656 } 00:05:09.656 }, 00:05:09.656 { 00:05:09.656 "method": "sock_impl_set_options", 00:05:09.656 "params": { 00:05:09.656 "enable_ktls": false, 00:05:09.656 "enable_placement_id": 0, 00:05:09.656 "enable_quickack": false, 00:05:09.656 "enable_recv_pipe": true, 00:05:09.656 "enable_zerocopy_send_client": false, 00:05:09.656 "enable_zerocopy_send_server": true, 00:05:09.656 "impl_name": "ssl", 00:05:09.656 "recv_buf_size": 4096, 00:05:09.656 "send_buf_size": 4096, 00:05:09.656 "tls_version": 0, 00:05:09.656 "zerocopy_threshold": 0 00:05:09.656 } 00:05:09.656 }, 00:05:09.656 { 00:05:09.656 "method": "sock_impl_set_options", 00:05:09.656 "params": { 00:05:09.656 "enable_ktls": false, 00:05:09.656 "enable_placement_id": 0, 00:05:09.656 "enable_quickack": false, 00:05:09.656 "enable_recv_pipe": true, 00:05:09.656 "enable_zerocopy_send_client": false, 00:05:09.656 "enable_zerocopy_send_server": true, 00:05:09.656 "impl_name": "posix", 00:05:09.656 "recv_buf_size": 2097152, 00:05:09.656 "send_buf_size": 2097152, 00:05:09.656 "tls_version": 0, 00:05:09.656 "zerocopy_threshold": 0 00:05:09.656 } 00:05:09.656 } 00:05:09.656 ] 00:05:09.656 }, 00:05:09.656 { 00:05:09.656 "subsystem": "vmd", 00:05:09.656 "config": [] 00:05:09.656 }, 00:05:09.656 { 00:05:09.656 "subsystem": "accel", 00:05:09.656 "config": [ 00:05:09.656 { 00:05:09.656 "method": "accel_set_options", 00:05:09.656 "params": { 00:05:09.656 "buf_count": 2048, 00:05:09.656 "large_cache_size": 16, 00:05:09.656 "sequence_count": 2048, 00:05:09.656 "small_cache_size": 128, 00:05:09.656 "task_count": 2048 00:05:09.656 } 00:05:09.656 } 00:05:09.656 ] 00:05:09.656 }, 00:05:09.656 { 00:05:09.656 "subsystem": "bdev", 00:05:09.656 "config": [ 00:05:09.656 { 00:05:09.656 "method": "bdev_set_options", 00:05:09.656 "params": { 00:05:09.656 "bdev_auto_examine": true, 00:05:09.656 "bdev_io_cache_size": 256, 00:05:09.656 "bdev_io_pool_size": 65535, 00:05:09.656 "iobuf_large_cache_size": 16, 00:05:09.656 "iobuf_small_cache_size": 128 00:05:09.656 } 00:05:09.656 }, 00:05:09.656 { 00:05:09.656 "method": "bdev_raid_set_options", 00:05:09.656 "params": { 00:05:09.656 "process_window_size_kb": 1024 00:05:09.656 } 00:05:09.656 }, 00:05:09.656 { 00:05:09.656 "method": "bdev_iscsi_set_options", 00:05:09.656 "params": { 00:05:09.656 "timeout_sec": 30 00:05:09.656 } 00:05:09.656 }, 00:05:09.656 { 00:05:09.656 "method": "bdev_nvme_set_options", 00:05:09.656 "params": { 00:05:09.656 "action_on_timeout": "none", 00:05:09.656 "allow_accel_sequence": false, 00:05:09.656 "arbitration_burst": 0, 00:05:09.656 "bdev_retry_count": 3, 00:05:09.656 "ctrlr_loss_timeout_sec": 0, 00:05:09.656 "delay_cmd_submit": true, 00:05:09.656 "dhchap_dhgroups": [ 00:05:09.656 "null", 00:05:09.656 "ffdhe2048", 00:05:09.656 "ffdhe3072", 00:05:09.657 "ffdhe4096", 00:05:09.657 "ffdhe6144", 00:05:09.657 "ffdhe8192" 00:05:09.657 ], 00:05:09.657 "dhchap_digests": [ 00:05:09.657 "sha256", 00:05:09.657 "sha384", 00:05:09.657 "sha512" 00:05:09.657 ], 00:05:09.657 "disable_auto_failback": false, 00:05:09.657 "fast_io_fail_timeout_sec": 0, 00:05:09.657 "generate_uuids": false, 00:05:09.657 "high_priority_weight": 0, 00:05:09.657 "io_path_stat": false, 00:05:09.657 "io_queue_requests": 0, 00:05:09.657 "keep_alive_timeout_ms": 10000, 00:05:09.657 "low_priority_weight": 0, 
00:05:09.657 "medium_priority_weight": 0, 00:05:09.657 "nvme_adminq_poll_period_us": 10000, 00:05:09.657 "nvme_error_stat": false, 00:05:09.657 "nvme_ioq_poll_period_us": 0, 00:05:09.657 "rdma_cm_event_timeout_ms": 0, 00:05:09.657 "rdma_max_cq_size": 0, 00:05:09.657 "rdma_srq_size": 0, 00:05:09.657 "reconnect_delay_sec": 0, 00:05:09.657 "timeout_admin_us": 0, 00:05:09.657 "timeout_us": 0, 00:05:09.657 "transport_ack_timeout": 0, 00:05:09.657 "transport_retry_count": 4, 00:05:09.657 "transport_tos": 0 00:05:09.657 } 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "method": "bdev_nvme_set_hotplug", 00:05:09.657 "params": { 00:05:09.657 "enable": false, 00:05:09.657 "period_us": 100000 00:05:09.657 } 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "method": "bdev_wait_for_examine" 00:05:09.657 } 00:05:09.657 ] 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "subsystem": "scsi", 00:05:09.657 "config": null 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "subsystem": "scheduler", 00:05:09.657 "config": [ 00:05:09.657 { 00:05:09.657 "method": "framework_set_scheduler", 00:05:09.657 "params": { 00:05:09.657 "name": "static" 00:05:09.657 } 00:05:09.657 } 00:05:09.657 ] 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "subsystem": "vhost_scsi", 00:05:09.657 "config": [] 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "subsystem": "vhost_blk", 00:05:09.657 "config": [] 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "subsystem": "ublk", 00:05:09.657 "config": [] 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "subsystem": "nbd", 00:05:09.657 "config": [] 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "subsystem": "nvmf", 00:05:09.657 "config": [ 00:05:09.657 { 00:05:09.657 "method": "nvmf_set_config", 00:05:09.657 "params": { 00:05:09.657 "admin_cmd_passthru": { 00:05:09.657 "identify_ctrlr": false 00:05:09.657 }, 00:05:09.657 "discovery_filter": "match_any" 00:05:09.657 } 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "method": "nvmf_set_max_subsystems", 00:05:09.657 "params": { 00:05:09.657 "max_subsystems": 1024 00:05:09.657 } 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "method": "nvmf_set_crdt", 00:05:09.657 "params": { 00:05:09.657 "crdt1": 0, 00:05:09.657 "crdt2": 0, 00:05:09.657 "crdt3": 0 00:05:09.657 } 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "method": "nvmf_create_transport", 00:05:09.657 "params": { 00:05:09.657 "abort_timeout_sec": 1, 00:05:09.657 "ack_timeout": 0, 00:05:09.657 "buf_cache_size": 4294967295, 00:05:09.657 "c2h_success": true, 00:05:09.657 "data_wr_pool_size": 0, 00:05:09.657 "dif_insert_or_strip": false, 00:05:09.657 "in_capsule_data_size": 4096, 00:05:09.657 "io_unit_size": 131072, 00:05:09.657 "max_aq_depth": 128, 00:05:09.657 "max_io_qpairs_per_ctrlr": 127, 00:05:09.657 "max_io_size": 131072, 00:05:09.657 "max_queue_depth": 128, 00:05:09.657 "num_shared_buffers": 511, 00:05:09.657 "sock_priority": 0, 00:05:09.657 "trtype": "TCP", 00:05:09.657 "zcopy": false 00:05:09.657 } 00:05:09.657 } 00:05:09.657 ] 00:05:09.657 }, 00:05:09.657 { 00:05:09.657 "subsystem": "iscsi", 00:05:09.657 "config": [ 00:05:09.657 { 00:05:09.657 "method": "iscsi_set_options", 00:05:09.657 "params": { 00:05:09.657 "allow_duplicated_isid": false, 00:05:09.657 "chap_group": 0, 00:05:09.657 "data_out_pool_size": 2048, 00:05:09.657 "default_time2retain": 20, 00:05:09.657 "default_time2wait": 2, 00:05:09.657 "disable_chap": false, 00:05:09.657 "error_recovery_level": 0, 00:05:09.657 "first_burst_length": 8192, 00:05:09.657 "immediate_data": true, 00:05:09.657 "immediate_data_pool_size": 16384, 00:05:09.657 "max_connections_per_session": 
2, 00:05:09.657 "max_large_datain_per_connection": 64, 00:05:09.657 "max_queue_depth": 64, 00:05:09.657 "max_r2t_per_connection": 4, 00:05:09.657 "max_sessions": 128, 00:05:09.657 "mutual_chap": false, 00:05:09.657 "node_base": "iqn.2016-06.io.spdk", 00:05:09.657 "nop_in_interval": 30, 00:05:09.657 "nop_timeout": 60, 00:05:09.657 "pdu_pool_size": 36864, 00:05:09.657 "require_chap": false 00:05:09.657 } 00:05:09.657 } 00:05:09.657 ] 00:05:09.657 } 00:05:09.657 ] 00:05:09.657 } 00:05:09.657 21:59:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:09.657 21:59:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61064 00:05:09.657 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61064 ']' 00:05:09.657 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61064 00:05:09.657 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:09.657 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.657 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61064 00:05:09.657 killing process with pid 61064 00:05:09.657 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.657 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.657 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61064' 00:05:09.657 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61064 00:05:09.657 21:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61064 00:05:09.915 21:59:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61090 00:05:09.915 21:59:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:09.915 21:59:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:15.176 22:00:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61090 00:05:15.176 22:00:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61090 ']' 00:05:15.176 22:00:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61090 00:05:15.176 22:00:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:15.176 22:00:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.176 22:00:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61090 00:05:15.176 killing process with pid 61090 00:05:15.176 22:00:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.176 22:00:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.176 22:00:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61090' 00:05:15.176 22:00:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61090 00:05:15.176 22:00:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61090 00:05:15.176 22:00:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:15.176 22:00:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:15.176 ************************************ 00:05:15.176 END TEST skip_rpc_with_json 00:05:15.176 ************************************ 00:05:15.176 00:05:15.176 real 0m6.179s 00:05:15.176 user 0m5.927s 00:05:15.176 sys 0m0.410s 00:05:15.176 22:00:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.176 22:00:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.176 22:00:02 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:15.176 22:00:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:15.176 22:00:02 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.176 22:00:02 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.176 22:00:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.176 ************************************ 00:05:15.176 START TEST skip_rpc_with_delay 00:05:15.176 ************************************ 00:05:15.176 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:15.176 22:00:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:15.176 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:15.177 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:15.177 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.177 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.177 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.177 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.177 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.177 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.177 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.177 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:15.177 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:15.433 [2024-07-15 22:00:02.150096] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
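The error above is the expected outcome of this test: skip_rpc_with_delay launches the target with a deliberately contradictory flag combination and only asserts that startup fails. A minimal sketch of the command under test, using the same binary path as the trace:

  # Expected to fail: --wait-for-rpc needs an RPC server, which --no-rpc-server disables.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # The harness then checks that the exit status is non-zero (es=1 in the trace).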
00:05:15.433 [2024-07-15 22:00:02.150280] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:15.433 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:15.433 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:15.433 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:15.433 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:15.433 00:05:15.433 real 0m0.095s 00:05:15.433 user 0m0.057s 00:05:15.433 sys 0m0.036s 00:05:15.433 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.433 ************************************ 00:05:15.433 END TEST skip_rpc_with_delay 00:05:15.433 ************************************ 00:05:15.433 22:00:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:15.433 22:00:02 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:15.433 22:00:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:15.433 22:00:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:15.433 22:00:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:15.433 22:00:02 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.433 22:00:02 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.433 22:00:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.433 ************************************ 00:05:15.433 START TEST exit_on_failed_rpc_init 00:05:15.433 ************************************ 00:05:15.433 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:15.433 22:00:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61200 00:05:15.433 22:00:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.433 22:00:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61200 00:05:15.433 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61200 ']' 00:05:15.433 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.433 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.433 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.433 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.433 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.433 [2024-07-15 22:00:02.305969] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
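The waitforlisten call above blocks until the freshly started target answers on /var/tmp/spdk.sock. As an illustration only (not the real helper from autotest_common.sh), the wait amounts to polling the RPC socket until a trivial call such as rpc_get_methods succeeds:

  # Poll the target's RPC socket until it responds; rpc_get_methods is the generic
  # method-listing RPC and stands in here for whatever probe the real helper uses.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done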
00:05:15.433 [2024-07-15 22:00:02.306291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61200 ] 00:05:15.691 [2024-07-15 22:00:02.445665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.691 [2024-07-15 22:00:02.516158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:15.950 22:00:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.950 [2024-07-15 22:00:02.774251] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:05:15.950 [2024-07-15 22:00:02.774356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61216 ] 00:05:16.208 [2024-07-15 22:00:02.916673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.208 [2024-07-15 22:00:03.023687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.208 [2024-07-15 22:00:03.023865] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
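The failure above is intentional: the second spdk_tgt is started without its own RPC socket, so it collides with the first instance's default /var/tmp/spdk.sock and exits. For comparison, later launches in this run avoid exactly this collision by passing -r; a sketch combining flags seen elsewhere in the trace (the -m 0x2 with -r pairing itself is illustrative):

  # Second instance bound to its own RPC socket instead of the default /var/tmp/spdk.sock.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -s 1024 -r /var/tmp/spdk_tgt.sock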
00:05:16.208 [2024-07-15 22:00:03.023896] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:16.208 [2024-07-15 22:00:03.023913] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61200 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61200 ']' 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61200 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61200 00:05:16.475 killing process with pid 61200 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61200' 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61200 00:05:16.475 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61200 00:05:16.736 ************************************ 00:05:16.737 END TEST exit_on_failed_rpc_init 00:05:16.737 ************************************ 00:05:16.737 00:05:16.737 real 0m1.207s 00:05:16.737 user 0m1.455s 00:05:16.737 sys 0m0.333s 00:05:16.737 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.737 22:00:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:16.737 22:00:03 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.737 22:00:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:16.737 ************************************ 00:05:16.737 END TEST skip_rpc 00:05:16.737 ************************************ 00:05:16.737 00:05:16.737 real 0m13.036s 00:05:16.737 user 0m12.556s 00:05:16.737 sys 0m1.102s 00:05:16.737 22:00:03 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.737 22:00:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.737 22:00:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:16.737 22:00:03 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:16.737 22:00:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.737 
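The killprocess sequence above reduces to: confirm the pid still belongs to the SPDK reactor that was started, signal it, and wait for it to exit. A condensed sketch of that pattern (the real helper in autotest_common.sh has more branches, e.g. the reactor_0 = sudo check visible in the trace):

  pid=61200
  # Only signal the process if it is still the reactor we launched.
  [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ] && kill "$pid"
  wait "$pid"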
22:00:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.737 22:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:16.737 ************************************ 00:05:16.737 START TEST rpc_client 00:05:16.737 ************************************ 00:05:16.737 22:00:03 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:16.737 * Looking for test storage... 00:05:16.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:16.737 22:00:03 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:16.737 OK 00:05:16.737 22:00:03 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:16.737 00:05:16.737 real 0m0.096s 00:05:16.737 user 0m0.047s 00:05:16.737 sys 0m0.053s 00:05:16.737 ************************************ 00:05:16.737 END TEST rpc_client 00:05:16.737 ************************************ 00:05:16.737 22:00:03 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.737 22:00:03 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:16.737 22:00:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:16.737 22:00:03 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:16.737 22:00:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.737 22:00:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.737 22:00:03 -- common/autotest_common.sh@10 -- # set +x 00:05:16.737 ************************************ 00:05:16.737 START TEST json_config 00:05:16.737 ************************************ 00:05:16.737 22:00:03 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:16.997 22:00:03 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:16.997 22:00:03 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.997 22:00:03 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.997 22:00:03 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.997 22:00:03 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.997 22:00:03 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.997 22:00:03 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.997 22:00:03 json_config -- paths/export.sh@5 -- # export PATH 00:05:16.997 22:00:03 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@47 -- # : 0 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:16.997 22:00:03 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:16.997 INFO: JSON configuration test init 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:16.997 22:00:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.997 22:00:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:16.997 22:00:03 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.997 22:00:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.997 Waiting for target to run... 00:05:16.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.997 22:00:03 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:16.997 22:00:03 json_config -- json_config/common.sh@9 -- # local app=target 00:05:16.997 22:00:03 json_config -- json_config/common.sh@10 -- # shift 00:05:16.997 22:00:03 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:16.997 22:00:03 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:16.997 22:00:03 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:16.997 22:00:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.997 22:00:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.997 22:00:03 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61334 00:05:16.997 22:00:03 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
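The associative arrays declared above are what json_config_test_start_app expands into the spdk_tgt command line; the launch traced just below is the result. A rough sketch of that composition, with quoting and the empty app_extra_params simplified:

  app=target
  # app_params[target]='-m 0x1 -s 1024', app_socket[target]='/var/tmp/spdk_tgt.sock'
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --wait-for-rpc &
  app_pid[$app]=$!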
00:05:16.997 22:00:03 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:16.997 22:00:03 json_config -- json_config/common.sh@25 -- # waitforlisten 61334 /var/tmp/spdk_tgt.sock 00:05:16.997 22:00:03 json_config -- common/autotest_common.sh@829 -- # '[' -z 61334 ']' 00:05:16.997 22:00:03 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.997 22:00:03 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.997 22:00:03 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.998 22:00:03 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.998 22:00:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.998 [2024-07-15 22:00:03.814634] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:05:16.998 [2024-07-15 22:00:03.815021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61334 ] 00:05:17.256 [2024-07-15 22:00:04.122114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.256 [2024-07-15 22:00:04.169347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.192 22:00:04 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.192 22:00:04 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:18.192 22:00:04 json_config -- json_config/common.sh@26 -- # echo '' 00:05:18.192 00:05:18.192 22:00:04 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:18.192 22:00:04 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:18.192 22:00:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.192 22:00:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.192 22:00:04 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:18.192 22:00:04 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:18.192 22:00:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.192 22:00:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.192 22:00:04 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:18.192 22:00:04 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:18.192 22:00:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:18.452 22:00:05 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:18.452 22:00:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:18.452 22:00:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.452 22:00:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.452 22:00:05 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:18.452 22:00:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:18.452 22:00:05 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:18.452 22:00:05 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:18.452 22:00:05 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:18.452 22:00:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:18.710 22:00:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.710 22:00:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:18.710 22:00:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.710 22:00:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:18.710 22:00:05 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:18.711 22:00:05 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:18.711 22:00:05 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:18.711 22:00:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:18.970 MallocForNvmf0 00:05:18.970 22:00:05 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.970 22:00:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:19.229 MallocForNvmf1 00:05:19.229 22:00:06 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.229 22:00:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.488 [2024-07-15 22:00:06.317832] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.488 22:00:06 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.488 22:00:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.746 22:00:06 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.746 22:00:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:20.004 22:00:06 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.004 22:00:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.573 22:00:07 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.573 22:00:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.832 [2024-07-15 22:00:07.550549] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:20.832 22:00:07 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:20.832 22:00:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:20.832 22:00:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.832 22:00:07 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:20.832 22:00:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:20.832 22:00:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.832 22:00:07 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:20.832 22:00:07 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.832 22:00:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:21.091 MallocBdevForConfigChangeCheck 00:05:21.091 22:00:07 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:21.091 22:00:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:21.091 22:00:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.091 22:00:07 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:21.091 22:00:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.658 INFO: shutting down applications... 00:05:21.658 22:00:08 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
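For reference, the create_nvmf_subsystem_config steps traced above come down to this sequence of rpc.py calls against the target socket (arguments exactly as logged):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock
  $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420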
00:05:21.658 22:00:08 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:21.658 22:00:08 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:21.658 22:00:08 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:21.658 22:00:08 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:21.916 Calling clear_iscsi_subsystem 00:05:21.916 Calling clear_nvmf_subsystem 00:05:21.916 Calling clear_nbd_subsystem 00:05:21.916 Calling clear_ublk_subsystem 00:05:21.916 Calling clear_vhost_blk_subsystem 00:05:21.916 Calling clear_vhost_scsi_subsystem 00:05:21.916 Calling clear_bdev_subsystem 00:05:21.916 22:00:08 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:21.916 22:00:08 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:21.916 22:00:08 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:21.916 22:00:08 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.916 22:00:08 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:21.916 22:00:08 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:22.481 22:00:09 json_config -- json_config/json_config.sh@345 -- # break 00:05:22.481 22:00:09 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:22.481 22:00:09 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:22.481 22:00:09 json_config -- json_config/common.sh@31 -- # local app=target 00:05:22.481 22:00:09 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:22.481 22:00:09 json_config -- json_config/common.sh@35 -- # [[ -n 61334 ]] 00:05:22.481 22:00:09 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61334 00:05:22.481 22:00:09 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:22.481 22:00:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.481 22:00:09 json_config -- json_config/common.sh@41 -- # kill -0 61334 00:05:22.481 22:00:09 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.739 22:00:09 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.739 22:00:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.739 22:00:09 json_config -- json_config/common.sh@41 -- # kill -0 61334 00:05:22.739 22:00:09 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:22.739 22:00:09 json_config -- json_config/common.sh@43 -- # break 00:05:22.739 22:00:09 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:22.739 SPDK target shutdown done 00:05:22.739 22:00:09 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:22.739 INFO: relaunching applications... 00:05:22.739 22:00:09 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
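The json_config_clear step above relies on two helpers from test/json_config/: clear_config.py tears down every subsystem over RPC (the "Calling clear_*_subsystem" lines), and config_filter.py checks that what save_config still reports is empty. The three commands traced from json_config.sh line 345 appear to come from a single pipeline; a sketch, with the filter order assumed:

  /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty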
00:05:22.739 22:00:09 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.739 22:00:09 json_config -- json_config/common.sh@9 -- # local app=target 00:05:22.739 22:00:09 json_config -- json_config/common.sh@10 -- # shift 00:05:22.739 22:00:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:22.739 22:00:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:22.739 22:00:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:22.739 22:00:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.739 22:00:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.739 22:00:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61614 00:05:22.739 22:00:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:22.739 Waiting for target to run... 00:05:22.739 22:00:09 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:22.739 22:00:09 json_config -- json_config/common.sh@25 -- # waitforlisten 61614 /var/tmp/spdk_tgt.sock 00:05:22.739 22:00:09 json_config -- common/autotest_common.sh@829 -- # '[' -z 61614 ']' 00:05:22.739 22:00:09 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:22.739 22:00:09 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:22.739 22:00:09 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:22.739 22:00:09 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.739 22:00:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.997 [2024-07-15 22:00:09.721424] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:05:22.997 [2024-07-15 22:00:09.721527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61614 ] 00:05:23.255 [2024-07-15 22:00:10.024420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.255 [2024-07-15 22:00:10.080155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.512 [2024-07-15 22:00:10.404173] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:23.512 [2024-07-15 22:00:10.436280] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:24.077 22:00:10 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.077 22:00:10 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:24.077 00:05:24.077 22:00:10 json_config -- json_config/common.sh@26 -- # echo '' 00:05:24.077 22:00:10 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:24.077 INFO: Checking if target configuration is the same... 00:05:24.077 22:00:10 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
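Two different mechanisms feed a JSON configuration to the target in this test. The first launch used --wait-for-rpc and received its config at runtime (the gen_nvme.sh / load_config pair traced earlier at json_config.sh@273-274, presumably piped), while the relaunch above consumes the saved file directly at startup via --json:

  # Runtime configuration (first launch; the pipe is inferred from the adjacent trace lines):
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems \
      | /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
  # Startup configuration (relaunch, as traced above):
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json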
00:05:24.077 22:00:10 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:24.077 22:00:10 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:24.077 22:00:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.077 + '[' 2 -ne 2 ']' 00:05:24.077 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:24.077 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:24.077 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:24.077 +++ basename /dev/fd/62 00:05:24.077 ++ mktemp /tmp/62.XXX 00:05:24.077 + tmp_file_1=/tmp/62.7H7 00:05:24.077 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:24.077 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:24.077 + tmp_file_2=/tmp/spdk_tgt_config.json.LFy 00:05:24.077 + ret=0 00:05:24.077 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:24.334 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:24.334 + diff -u /tmp/62.7H7 /tmp/spdk_tgt_config.json.LFy 00:05:24.334 INFO: JSON config files are the same 00:05:24.334 + echo 'INFO: JSON config files are the same' 00:05:24.334 + rm /tmp/62.7H7 /tmp/spdk_tgt_config.json.LFy 00:05:24.334 + exit 0 00:05:24.334 22:00:11 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:24.334 INFO: changing configuration and checking if this can be detected... 00:05:24.334 22:00:11 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:24.334 22:00:11 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:24.334 22:00:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:24.592 22:00:11 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:24.592 22:00:11 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:24.592 22:00:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.592 + '[' 2 -ne 2 ']' 00:05:24.592 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:24.592 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
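The json_diff.sh pass above normalizes both sides before comparing, since save_config output ordering is not guaranteed: the live save_config dump and spdk_tgt_config.json are each run through config_filter.py -method sort and then diffed. How exactly the sorted output reaches diff is not visible in the trace; a minimal equivalent, assuming the sort filter reads stdin, would be:

  diff -u \
      <(/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /tmp/62.7H7) \
      <(/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /tmp/spdk_tgt_config.json.LFy) \
      && echo 'INFO: JSON config files are the same'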
00:05:24.592 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:24.592 +++ basename /dev/fd/62 00:05:24.592 ++ mktemp /tmp/62.XXX 00:05:24.592 + tmp_file_1=/tmp/62.DEf 00:05:24.592 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:24.592 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:24.592 + tmp_file_2=/tmp/spdk_tgt_config.json.RHh 00:05:24.592 + ret=0 00:05:24.592 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:25.162 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:25.162 + diff -u /tmp/62.DEf /tmp/spdk_tgt_config.json.RHh 00:05:25.162 + ret=1 00:05:25.162 + echo '=== Start of file: /tmp/62.DEf ===' 00:05:25.162 + cat /tmp/62.DEf 00:05:25.162 + echo '=== End of file: /tmp/62.DEf ===' 00:05:25.162 + echo '' 00:05:25.162 + echo '=== Start of file: /tmp/spdk_tgt_config.json.RHh ===' 00:05:25.162 + cat /tmp/spdk_tgt_config.json.RHh 00:05:25.162 + echo '=== End of file: /tmp/spdk_tgt_config.json.RHh ===' 00:05:25.162 + echo '' 00:05:25.162 + rm /tmp/62.DEf /tmp/spdk_tgt_config.json.RHh 00:05:25.162 + exit 1 00:05:25.162 INFO: configuration change detected. 00:05:25.162 22:00:11 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:25.162 22:00:11 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:25.162 22:00:11 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:25.162 22:00:11 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.162 22:00:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.162 22:00:12 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:25.162 22:00:12 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:25.162 22:00:12 json_config -- json_config/json_config.sh@317 -- # [[ -n 61614 ]] 00:05:25.162 22:00:12 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:25.162 22:00:12 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.162 22:00:12 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:25.162 22:00:12 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:25.162 22:00:12 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:25.162 22:00:12 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:25.162 22:00:12 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:25.162 22:00:12 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.162 22:00:12 json_config -- json_config/json_config.sh@323 -- # killprocess 61614 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@948 -- # '[' -z 61614 ']' 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@952 -- # kill -0 61614 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@953 -- # uname 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61614 00:05:25.162 
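The second comparison is expected to fail (ret=1 above), because the live configuration was mutated between saving spdk_tgt_config.json and re-checking it. The mutation is the single RPC traced before the diff:

  # Removing the sentinel bdev makes the running config diverge from spdk_tgt_config.json,
  # so the follow-up json_diff.sh exits 1 and 'INFO: configuration change detected.' is printed.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck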
22:00:12 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61614' 00:05:25.162 killing process with pid 61614 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@967 -- # kill 61614 00:05:25.162 22:00:12 json_config -- common/autotest_common.sh@972 -- # wait 61614 00:05:25.419 22:00:12 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:25.419 22:00:12 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:25.419 22:00:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.419 22:00:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.419 INFO: Success 00:05:25.419 22:00:12 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:25.419 22:00:12 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:25.419 00:05:25.419 real 0m8.653s 00:05:25.419 user 0m12.752s 00:05:25.419 sys 0m1.587s 00:05:25.419 22:00:12 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.419 22:00:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.419 ************************************ 00:05:25.419 END TEST json_config 00:05:25.419 ************************************ 00:05:25.419 22:00:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.420 22:00:12 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:25.420 22:00:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.420 22:00:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.420 22:00:12 -- common/autotest_common.sh@10 -- # set +x 00:05:25.420 ************************************ 00:05:25.420 START TEST json_config_extra_key 00:05:25.420 ************************************ 00:05:25.420 22:00:12 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:25.677 22:00:12 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.677 22:00:12 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.677 22:00:12 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.677 22:00:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.677 22:00:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.677 22:00:12 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.677 22:00:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:25.677 22:00:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.677 22:00:12 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:25.677 22:00:12 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:25.677 INFO: launching applications... 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:25.677 22:00:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:25.677 22:00:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:25.677 22:00:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:25.677 22:00:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:25.677 22:00:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:25.677 22:00:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:25.677 22:00:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.677 22:00:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.677 22:00:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61785 00:05:25.677 Waiting for target to run... 00:05:25.677 22:00:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
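The four associative arrays declared above (app_pid, app_socket, app_params, configs_path) are what the shared json_config/common.sh helpers key on when json_config_test_start_app brings up the "target" app with the extra_key.json config. A rough sketch of that pattern, simplified from the helper rather than quoted from it (error handling and the waitforlisten step are omitted):

# Sketch only: how json_config_test_start_app consumes the arrays declared above.
app=target
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
app_pid[$app]=$!
echo "Waiting for $app to run..."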
00:05:25.677 22:00:12 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:25.677 22:00:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61785 /var/tmp/spdk_tgt.sock 00:05:25.677 22:00:12 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61785 ']' 00:05:25.677 22:00:12 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.677 22:00:12 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:25.677 22:00:12 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.678 22:00:12 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.678 22:00:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:25.678 [2024-07-15 22:00:12.498570] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:05:25.678 [2024-07-15 22:00:12.498673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61785 ] 00:05:25.936 [2024-07-15 22:00:12.798812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.936 [2024-07-15 22:00:12.855986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.868 22:00:13 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.868 22:00:13 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:26.868 00:05:26.868 22:00:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:26.868 INFO: shutting down applications... 00:05:26.868 22:00:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
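waitforlisten above simply blocks until pid 61785 is alive and its RPC socket at /var/tmp/spdk_tgt.sock answers. A minimal, illustrative way to reproduce that check by hand (rpc_get_methods is a standard SPDK RPC; the polling loop itself is a sketch, not the autotest helper):

# Illustrative sketch: poll the target's RPC socket until it responds,
# roughly what waitforlisten does for pid 61785 above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until $rpc -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 61785 2>/dev/null || { echo 'target died before listening'; exit 1; }
        sleep 0.5
done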
00:05:26.868 22:00:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:26.868 22:00:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:26.868 22:00:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:26.868 22:00:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61785 ]] 00:05:26.868 22:00:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61785 00:05:26.868 22:00:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:26.868 22:00:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.868 22:00:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61785 00:05:26.868 22:00:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.125 22:00:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.125 22:00:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.125 22:00:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61785 00:05:27.125 22:00:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:27.125 22:00:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:27.125 22:00:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:27.125 SPDK target shutdown done 00:05:27.125 22:00:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:27.125 Success 00:05:27.125 22:00:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:27.125 00:05:27.125 real 0m1.678s 00:05:27.125 user 0m1.562s 00:05:27.125 sys 0m0.301s 00:05:27.125 22:00:14 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.125 22:00:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.125 ************************************ 00:05:27.125 END TEST json_config_extra_key 00:05:27.125 ************************************ 00:05:27.125 22:00:14 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.125 22:00:14 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.125 22:00:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.125 22:00:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.125 22:00:14 -- common/autotest_common.sh@10 -- # set +x 00:05:27.125 ************************************ 00:05:27.125 START TEST alias_rpc 00:05:27.125 ************************************ 00:05:27.126 22:00:14 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:27.383 * Looking for test storage... 
00:05:27.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:27.383 22:00:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.383 22:00:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61861 00:05:27.383 22:00:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61861 00:05:27.383 22:00:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.383 22:00:14 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61861 ']' 00:05:27.383 22:00:14 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.383 22:00:14 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.383 22:00:14 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.383 22:00:14 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.383 22:00:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.383 [2024-07-15 22:00:14.219229] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:05:27.383 [2024-07-15 22:00:14.219341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61861 ] 00:05:27.641 [2024-07-15 22:00:14.356309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.641 [2024-07-15 22:00:14.427574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.574 22:00:15 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.574 22:00:15 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:28.574 22:00:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:28.832 22:00:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61861 00:05:28.832 22:00:15 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61861 ']' 00:05:28.832 22:00:15 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61861 00:05:28.832 22:00:15 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:28.832 22:00:15 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.832 22:00:15 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61861 00:05:28.832 22:00:15 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.832 22:00:15 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.832 22:00:15 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61861' 00:05:28.832 killing process with pid 61861 00:05:28.832 22:00:15 alias_rpc -- common/autotest_common.sh@967 -- # kill 61861 00:05:28.832 22:00:15 alias_rpc -- common/autotest_common.sh@972 -- # wait 61861 00:05:29.090 00:05:29.090 real 0m1.762s 00:05:29.090 user 0m2.193s 00:05:29.090 sys 0m0.339s 00:05:29.090 22:00:15 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.090 22:00:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.090 ************************************ 00:05:29.090 END TEST alias_rpc 00:05:29.090 ************************************ 00:05:29.090 
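The alias_rpc run above boils down to feeding a JSON config of deprecated method names into rpc.py load_config -i on a bare spdk_tgt (the test pipes the config on stdin; rpc.py load_config falls back to stdin when no filename is passed). A hedged illustration of round-tripping a configuration through the same helpers; save_config is the standard counterpart command, and the temp path here is made up for the example:

# Illustration only: dump the running target's config and feed it straight back,
# the same load_config path the alias_rpc test exercises above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/spdk_config.json        # hypothetical output path
$rpc load_config -i < /tmp/spdk_config.json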
22:00:15 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.090 22:00:15 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:05:29.090 22:00:15 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:29.090 22:00:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.090 22:00:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.090 22:00:15 -- common/autotest_common.sh@10 -- # set +x 00:05:29.090 ************************************ 00:05:29.090 START TEST dpdk_mem_utility 00:05:29.090 ************************************ 00:05:29.090 22:00:15 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:29.090 * Looking for test storage... 00:05:29.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:29.090 22:00:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:29.090 22:00:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61953 00:05:29.090 22:00:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.090 22:00:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61953 00:05:29.090 22:00:15 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61953 ']' 00:05:29.090 22:00:15 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.090 22:00:15 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.090 22:00:15 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.090 22:00:15 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.090 22:00:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.348 [2024-07-15 22:00:16.061385] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
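The dpdk_mem_utility test starting here exercises two pieces that show up further down in this log: the env_dpdk_get_mem_stats RPC, which makes the target write /tmp/spdk_mem_dump.txt, and the dpdk_mem_info.py script named in MEM_SCRIPT above, which parses that dump. A condensed sketch of the flow, restricted to commands that actually appear in this run (the -m 0 form is the per-element dump the test prints below):

# Sketch of the flow this test drives (see the RPC output and dumps below).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc env_dpdk_get_mem_stats                                 # target replies {"filename": "/tmp/spdk_mem_dump.txt"}
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py       # heap/mempool/memzone summary
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0  # detailed element listing, as printed below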
00:05:29.348 [2024-07-15 22:00:16.061476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61953 ] 00:05:29.348 [2024-07-15 22:00:16.199434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.348 [2024-07-15 22:00:16.270254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.281 22:00:16 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.281 22:00:16 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:30.281 22:00:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:30.281 22:00:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:30.281 22:00:16 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.281 22:00:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.281 { 00:05:30.281 "filename": "/tmp/spdk_mem_dump.txt" 00:05:30.281 } 00:05:30.281 22:00:16 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.281 22:00:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:30.281 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:30.281 1 heaps totaling size 814.000000 MiB 00:05:30.281 size: 814.000000 MiB heap id: 0 00:05:30.281 end heaps---------- 00:05:30.281 8 mempools totaling size 598.116089 MiB 00:05:30.281 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:30.281 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:30.281 size: 84.521057 MiB name: bdev_io_61953 00:05:30.281 size: 51.011292 MiB name: evtpool_61953 00:05:30.281 size: 50.003479 MiB name: msgpool_61953 00:05:30.281 size: 21.763794 MiB name: PDU_Pool 00:05:30.281 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:30.281 size: 0.026123 MiB name: Session_Pool 00:05:30.281 end mempools------- 00:05:30.281 6 memzones totaling size 4.142822 MiB 00:05:30.281 size: 1.000366 MiB name: RG_ring_0_61953 00:05:30.281 size: 1.000366 MiB name: RG_ring_1_61953 00:05:30.281 size: 1.000366 MiB name: RG_ring_4_61953 00:05:30.281 size: 1.000366 MiB name: RG_ring_5_61953 00:05:30.281 size: 0.125366 MiB name: RG_ring_2_61953 00:05:30.281 size: 0.015991 MiB name: RG_ring_3_61953 00:05:30.282 end memzones------- 00:05:30.282 22:00:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:30.282 heap id: 0 total size: 814.000000 MiB number of busy elements: 218 number of free elements: 15 00:05:30.282 list of free elements. 
size: 12.486938 MiB 00:05:30.282 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:30.282 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:30.282 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:30.282 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:30.282 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:30.282 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:30.282 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:30.282 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:30.282 element at address: 0x200000200000 with size: 0.837036 MiB 00:05:30.282 element at address: 0x20001aa00000 with size: 0.572998 MiB 00:05:30.282 element at address: 0x20000b200000 with size: 0.489807 MiB 00:05:30.282 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:30.282 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:30.282 element at address: 0x200027e00000 with size: 0.398499 MiB 00:05:30.282 element at address: 0x200003a00000 with size: 0.350769 MiB 00:05:30.282 list of standard malloc elements. size: 199.250488 MiB 00:05:30.282 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:30.282 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:30.282 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:30.282 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:30.282 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:30.282 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:30.282 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:30.282 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:30.282 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:30.282 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:05:30.282 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:30.282 element at 
address: 0x20000b27d640 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa946c0 
with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:30.282 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:30.283 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:30.283 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:30.283 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:30.283 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e66040 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e66100 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6cd00 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6e340 with size: 0.000183 MiB 
00:05:30.283 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:30.283 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:30.283 list of memzone associated elements. 
size: 602.262573 MiB 00:05:30.283 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:30.283 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:30.283 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:30.283 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:30.283 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:30.283 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61953_0 00:05:30.283 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:30.283 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61953_0 00:05:30.283 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:30.283 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61953_0 00:05:30.283 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:30.283 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:30.283 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:30.283 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:30.283 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:30.283 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61953 00:05:30.283 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:30.283 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61953 00:05:30.283 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:30.283 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61953 00:05:30.283 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:30.283 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:30.283 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:30.283 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:30.283 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:30.283 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:30.283 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:30.283 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:30.283 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:30.283 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61953 00:05:30.283 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:30.283 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61953 00:05:30.283 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:30.283 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61953 00:05:30.283 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:30.283 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61953 00:05:30.283 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:30.283 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61953 00:05:30.283 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:30.283 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:30.283 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:30.283 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:30.283 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:30.283 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:30.283 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:30.283 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61953 00:05:30.283 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:30.283 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:30.283 element at address: 0x200027e661c0 with size: 0.023743 MiB 00:05:30.283 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:30.283 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:30.283 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61953 00:05:30.283 element at address: 0x200027e6c300 with size: 0.002441 MiB 00:05:30.283 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:30.283 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:30.283 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61953 00:05:30.283 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:30.283 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61953 00:05:30.283 element at address: 0x200027e6cdc0 with size: 0.000305 MiB 00:05:30.283 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:30.283 22:00:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:30.283 22:00:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61953 00:05:30.283 22:00:17 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61953 ']' 00:05:30.283 22:00:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61953 00:05:30.283 22:00:17 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:30.283 22:00:17 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.283 22:00:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61953 00:05:30.283 22:00:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.283 22:00:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.283 killing process with pid 61953 00:05:30.283 22:00:17 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61953' 00:05:30.283 22:00:17 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61953 00:05:30.283 22:00:17 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61953 00:05:30.542 00:05:30.542 real 0m1.504s 00:05:30.542 user 0m1.694s 00:05:30.542 sys 0m0.336s 00:05:30.542 22:00:17 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.542 22:00:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.542 ************************************ 00:05:30.542 END TEST dpdk_mem_utility 00:05:30.542 ************************************ 00:05:30.542 22:00:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.542 22:00:17 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:30.542 22:00:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.542 22:00:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.542 22:00:17 -- common/autotest_common.sh@10 -- # set +x 00:05:30.542 ************************************ 00:05:30.542 START TEST event 00:05:30.542 ************************************ 00:05:30.542 22:00:17 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:30.801 * Looking for test storage... 
00:05:30.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:30.801 22:00:17 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:30.801 22:00:17 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:30.801 22:00:17 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.801 22:00:17 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:30.801 22:00:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.801 22:00:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.801 ************************************ 00:05:30.801 START TEST event_perf 00:05:30.801 ************************************ 00:05:30.801 22:00:17 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:30.801 Running I/O for 1 seconds...[2024-07-15 22:00:17.544780] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:05:30.801 [2024-07-15 22:00:17.544862] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62043 ] 00:05:30.801 [2024-07-15 22:00:17.683627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.059 [2024-07-15 22:00:17.760915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.059 [2024-07-15 22:00:17.761051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.059 Running I/O for 1 seconds...[2024-07-15 22:00:17.761334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.059 [2024-07-15 22:00:17.761150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.991 00:05:31.991 lcore 0: 192722 00:05:31.991 lcore 1: 192721 00:05:31.991 lcore 2: 192720 00:05:31.991 lcore 3: 192720 00:05:31.991 done. 00:05:31.991 00:05:31.991 real 0m1.321s 00:05:31.991 user 0m4.147s 00:05:31.991 sys 0m0.049s 00:05:31.992 22:00:18 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.992 22:00:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.992 ************************************ 00:05:31.992 END TEST event_perf 00:05:31.992 ************************************ 00:05:31.992 22:00:18 event -- common/autotest_common.sh@1142 -- # return 0 00:05:31.992 22:00:18 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:31.992 22:00:18 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:31.992 22:00:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.992 22:00:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.992 ************************************ 00:05:31.992 START TEST event_reactor 00:05:31.992 ************************************ 00:05:31.992 22:00:18 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:31.992 [2024-07-15 22:00:18.916628] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:05:31.992 [2024-07-15 22:00:18.916723] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62076 ] 00:05:32.249 [2024-07-15 22:00:19.051260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.249 [2024-07-15 22:00:19.111827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.667 test_start 00:05:33.667 oneshot 00:05:33.667 tick 100 00:05:33.667 tick 100 00:05:33.667 tick 250 00:05:33.667 tick 100 00:05:33.667 tick 100 00:05:33.667 tick 100 00:05:33.667 tick 250 00:05:33.667 tick 500 00:05:33.667 tick 100 00:05:33.667 tick 100 00:05:33.667 tick 250 00:05:33.667 tick 100 00:05:33.667 tick 100 00:05:33.667 test_end 00:05:33.667 00:05:33.667 real 0m1.295s 00:05:33.667 user 0m1.140s 00:05:33.667 sys 0m0.048s 00:05:33.667 22:00:20 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.667 ************************************ 00:05:33.667 END TEST event_reactor 00:05:33.667 22:00:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:33.667 ************************************ 00:05:33.667 22:00:20 event -- common/autotest_common.sh@1142 -- # return 0 00:05:33.667 22:00:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:33.667 22:00:20 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:33.667 22:00:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.667 22:00:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.667 ************************************ 00:05:33.667 START TEST event_reactor_perf 00:05:33.667 ************************************ 00:05:33.667 22:00:20 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:33.667 [2024-07-15 22:00:20.257006] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:05:33.667 [2024-07-15 22:00:20.257117] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62111 ] 00:05:33.667 [2024-07-15 22:00:20.389740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.667 [2024-07-15 22:00:20.471437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.600 test_start 00:05:34.600 test_end 00:05:34.600 Performance: 355549 events per second 00:05:34.600 00:05:34.600 real 0m1.305s 00:05:34.600 user 0m1.163s 00:05:34.600 sys 0m0.036s 00:05:34.600 22:00:21 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.600 22:00:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.600 ************************************ 00:05:34.600 END TEST event_reactor_perf 00:05:34.600 ************************************ 00:05:34.859 22:00:21 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.859 22:00:21 event -- event/event.sh@49 -- # uname -s 00:05:34.859 22:00:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:34.859 22:00:21 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:34.859 22:00:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.859 22:00:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.859 22:00:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.859 ************************************ 00:05:34.859 START TEST event_scheduler 00:05:34.859 ************************************ 00:05:34.859 22:00:21 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:34.859 * Looking for test storage... 00:05:34.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:34.859 22:00:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:34.859 22:00:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62173 00:05:34.859 22:00:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.859 22:00:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:34.859 22:00:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62173 00:05:34.859 22:00:21 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62173 ']' 00:05:34.859 22:00:21 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.859 22:00:21 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.859 22:00:21 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.859 22:00:21 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.859 22:00:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.859 [2024-07-15 22:00:21.733298] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:05:34.859 [2024-07-15 22:00:21.733402] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62173 ] 00:05:35.117 [2024-07-15 22:00:21.874860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:35.117 [2024-07-15 22:00:21.959162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.117 [2024-07-15 22:00:21.959571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.117 [2024-07-15 22:00:21.960235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.117 [2024-07-15 22:00:21.960258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.117 22:00:21 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.117 22:00:21 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:35.117 22:00:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:35.117 22:00:21 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.117 22:00:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.117 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.117 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.117 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.117 POWER: Cannot set governor of lcore 0 to performance 00:05:35.117 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.117 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.117 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.117 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.117 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:35.117 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:35.117 POWER: Unable to set Power Management Environment for lcore 0 00:05:35.117 [2024-07-15 22:00:22.001410] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:35.117 [2024-07-15 22:00:22.001426] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:35.117 [2024-07-15 22:00:22.001436] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:35.117 [2024-07-15 22:00:22.001451] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:35.117 [2024-07-15 22:00:22.001460] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:35.117 [2024-07-15 22:00:22.001469] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:35.117 22:00:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.117 22:00:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:35.117 22:00:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.117 22:00:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 [2024-07-15 22:00:22.071523] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:35.376 22:00:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.376 22:00:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:35.376 22:00:22 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.376 22:00:22 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 ************************************ 00:05:35.376 START TEST scheduler_create_thread 00:05:35.376 ************************************ 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 2 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 3 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 4 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 5 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 6 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 7 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 8 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 9 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 10 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.376 22:00:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.751 22:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.751 22:00:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:36.751 22:00:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:36.751 22:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.751 22:00:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.123 ************************************ 00:05:38.123 END TEST scheduler_create_thread 00:05:38.123 ************************************ 00:05:38.123 22:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.123 00:05:38.123 real 0m2.613s 00:05:38.123 user 0m0.017s 00:05:38.123 sys 0m0.006s 00:05:38.123 22:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.123 22:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.123 22:00:24 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:38.123 22:00:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.123 22:00:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62173 00:05:38.123 22:00:24 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62173 ']' 00:05:38.123 22:00:24 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62173 00:05:38.123 22:00:24 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:38.123 22:00:24 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.123 22:00:24 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62173 00:05:38.123 killing process with pid 62173 00:05:38.123 22:00:24 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:38.123 22:00:24 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:38.123 22:00:24 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62173' 00:05:38.123 22:00:24 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62173 00:05:38.123 22:00:24 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62173 00:05:38.380 [2024-07-15 22:00:25.175633] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
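The scheduler_create_thread trace above is driven entirely through SPDK's RPC plugin mechanism: rpc_cmd in the harness wraps scripts/rpc.py, and the scheduler plugin supplies the scheduler_thread_* methods. A minimal Bash sketch of the same sequence, assuming the plugin lives under test/event/scheduler (only "--plugin scheduler_plugin" is visible in the trace; the path is an assumption):

    # Assumption: scheduler_plugin.py is importable from the test directory.
    export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/event/scheduler
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"; }

    # Four threads pinned to single cores at 100% activity (scheduler.sh@12-15)...
    for mask in 0x1 0x2 0x4 0x8; do
        rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done
    # ...and four idle pinned threads (scheduler.sh@16-19).
    for mask in 0x1 0x2 0x4 0x8; do
        rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    rpc scheduler_thread_create -n one_third_active -a 30
    tid=$(rpc scheduler_thread_create -n half_active -a 0)   # trace: thread_id=11
    rpc scheduler_thread_set_active "$tid" 50                # bump it to 50% busy
    did=$(rpc scheduler_thread_create -n deleted -a 100)     # trace: thread_id=12
    rpc scheduler_thread_delete "$did"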
00:05:38.637 00:05:38.638 real 0m3.759s 00:05:38.638 user 0m5.542s 00:05:38.638 sys 0m0.298s 00:05:38.638 22:00:25 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.638 ************************************ 00:05:38.638 END TEST event_scheduler 00:05:38.638 22:00:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.638 ************************************ 00:05:38.638 22:00:25 event -- common/autotest_common.sh@1142 -- # return 0 00:05:38.638 22:00:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.638 22:00:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.638 22:00:25 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.638 22:00:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.638 22:00:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.638 ************************************ 00:05:38.638 START TEST app_repeat 00:05:38.638 ************************************ 00:05:38.638 22:00:25 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62271 00:05:38.638 Process app_repeat pid: 62271 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62271' 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.638 spdk_app_start Round 0 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.638 22:00:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62271 /var/tmp/spdk-nbd.sock 00:05:38.638 22:00:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62271 ']' 00:05:38.638 22:00:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.638 22:00:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.638 22:00:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.638 22:00:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.638 22:00:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.638 [2024-07-15 22:00:25.439030] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
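The app_repeat prologue condenses to: load the nbd kernel module, start the test app on two cores (-m 0x3) with its own RPC socket, and block until that socket answers. A rough Bash equivalent, assuming waitforlisten simply polls the socket with rpc_get_methods (the real autotest_common.sh helper does more checking than this sketch):

    modprobe nbd
    app=/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat
    sock=/var/tmp/spdk-nbd.sock
    "$app" -r "$sock" -m 0x3 -t 4 &        # options exactly as in the trace
    repeat_pid=$!
    trap 'killprocess "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT   # killprocess: autotest helper

    # Assumed stand-in for: waitforlisten "$repeat_pid" "$sock"
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done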
00:05:38.638 [2024-07-15 22:00:25.439130] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62271 ] 00:05:38.638 [2024-07-15 22:00:25.576485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.895 [2024-07-15 22:00:25.645183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.895 [2024-07-15 22:00:25.645199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.895 22:00:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.895 22:00:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:38.895 22:00:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.152 Malloc0 00:05:39.152 22:00:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.410 Malloc1 00:05:39.410 22:00:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.410 22:00:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.667 /dev/nbd0 00:05:39.667 22:00:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.667 22:00:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:39.667 22:00:26 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.667 1+0 records in 00:05:39.667 1+0 records out 00:05:39.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042017 s, 9.7 MB/s 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:39.667 22:00:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:39.667 22:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.667 22:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.667 22:00:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.232 /dev/nbd1 00:05:40.232 22:00:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.232 22:00:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.232 1+0 records in 00:05:40.232 1+0 records out 00:05:40.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333177 s, 12.3 MB/s 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.232 22:00:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:40.232 22:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.232 22:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.232 22:00:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.232 22:00:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.232 
22:00:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.562 { 00:05:40.562 "bdev_name": "Malloc0", 00:05:40.562 "nbd_device": "/dev/nbd0" 00:05:40.562 }, 00:05:40.562 { 00:05:40.562 "bdev_name": "Malloc1", 00:05:40.562 "nbd_device": "/dev/nbd1" 00:05:40.562 } 00:05:40.562 ]' 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.562 { 00:05:40.562 "bdev_name": "Malloc0", 00:05:40.562 "nbd_device": "/dev/nbd0" 00:05:40.562 }, 00:05:40.562 { 00:05:40.562 "bdev_name": "Malloc1", 00:05:40.562 "nbd_device": "/dev/nbd1" 00:05:40.562 } 00:05:40.562 ]' 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.562 /dev/nbd1' 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.562 /dev/nbd1' 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.562 256+0 records in 00:05:40.562 256+0 records out 00:05:40.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00682876 s, 154 MB/s 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.562 256+0 records in 00:05:40.562 256+0 records out 00:05:40.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026844 s, 39.1 MB/s 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.562 256+0 records in 00:05:40.562 256+0 records out 00:05:40.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274503 s, 38.2 MB/s 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.562 22:00:27 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.562 22:00:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.844 22:00:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.844 22:00:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.844 22:00:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.844 22:00:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.844 22:00:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.844 22:00:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.844 22:00:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.844 22:00:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.844 22:00:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.844 22:00:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.101 22:00:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.101 22:00:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.101 22:00:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.101 22:00:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.101 22:00:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.101 22:00:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.101 22:00:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.101 22:00:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.101 22:00:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.101 22:00:28 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.101 22:00:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.666 22:00:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.666 22:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.666 22:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.666 22:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.667 22:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.667 22:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.667 22:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.667 22:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.667 22:00:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.667 22:00:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.667 22:00:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.667 22:00:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.667 22:00:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.925 22:00:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.925 [2024-07-15 22:00:28.819251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.183 [2024-07-15 22:00:28.882818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.183 [2024-07-15 22:00:28.882829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.183 [2024-07-15 22:00:28.914996] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.183 [2024-07-15 22:00:28.915109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.458 22:00:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.458 spdk_app_start Round 1 00:05:45.458 22:00:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:45.458 22:00:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62271 /var/tmp/spdk-nbd.sock 00:05:45.458 22:00:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62271 ']' 00:05:45.458 22:00:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.458 22:00:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.458 22:00:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
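Every round repeats the data path that Round 0 just traced: create two 64 MB malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device with O_DIRECT, read it back for comparison, then tear the NBD devices down. Condensed into plain commands (RPC names, sizes and the temp-file path are all taken from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    $rpc -s "$sock" bdev_malloc_create 64 4096        # 64 MB, 4 KiB blocks -> Malloc0
    $rpc -s "$sock" bdev_malloc_create 64 4096        # -> Malloc1
    $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$nbd"                    # byte-for-byte verification
    done
    rm "$tmp"

    for nbd in /dev/nbd0 /dev/nbd1; do
        $rpc -s "$sock" nbd_stop_disk "$nbd"
    done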
00:05:45.458 22:00:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.458 22:00:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.458 22:00:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.458 22:00:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:45.458 22:00:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.458 Malloc0 00:05:45.458 22:00:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.715 Malloc1 00:05:45.715 22:00:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.715 22:00:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.972 /dev/nbd0 00:05:45.972 22:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.972 22:00:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.972 1+0 records in 00:05:45.972 1+0 records out 
00:05:45.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025959 s, 15.8 MB/s 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:45.972 22:00:32 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:45.972 22:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.972 22:00:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.972 22:00:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.229 /dev/nbd1 00:05:46.491 22:00:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.491 22:00:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.491 1+0 records in 00:05:46.491 1+0 records out 00:05:46.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386316 s, 10.6 MB/s 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.491 22:00:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:46.491 22:00:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.491 22:00:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.491 22:00:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.491 22:00:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.491 22:00:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.751 { 00:05:46.751 "bdev_name": "Malloc0", 00:05:46.751 "nbd_device": "/dev/nbd0" 00:05:46.751 }, 00:05:46.751 { 00:05:46.751 "bdev_name": "Malloc1", 00:05:46.751 "nbd_device": "/dev/nbd1" 00:05:46.751 } 
00:05:46.751 ]' 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.751 { 00:05:46.751 "bdev_name": "Malloc0", 00:05:46.751 "nbd_device": "/dev/nbd0" 00:05:46.751 }, 00:05:46.751 { 00:05:46.751 "bdev_name": "Malloc1", 00:05:46.751 "nbd_device": "/dev/nbd1" 00:05:46.751 } 00:05:46.751 ]' 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.751 /dev/nbd1' 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.751 /dev/nbd1' 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.751 256+0 records in 00:05:46.751 256+0 records out 00:05:46.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0059934 s, 175 MB/s 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.751 256+0 records in 00:05:46.751 256+0 records out 00:05:46.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262951 s, 39.9 MB/s 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.751 256+0 records in 00:05:46.751 256+0 records out 00:05:46.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289931 s, 36.2 MB/s 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.751 22:00:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.008 22:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.008 22:00:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.008 22:00:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.008 22:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.008 22:00:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.008 22:00:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.008 22:00:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.008 22:00:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.008 22:00:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.008 22:00:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.265 22:00:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.265 22:00:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.265 22:00:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.265 22:00:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.265 22:00:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.265 22:00:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.265 22:00:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.265 22:00:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.265 22:00:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.265 22:00:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.265 22:00:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.521 22:00:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.521 22:00:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.521 22:00:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:47.796 22:00:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.796 22:00:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.796 22:00:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.796 22:00:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.796 22:00:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.796 22:00:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.796 22:00:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.796 22:00:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.796 22:00:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.796 22:00:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.084 22:00:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.084 [2024-07-15 22:00:34.949630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.084 [2024-07-15 22:00:35.009606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.084 [2024-07-15 22:00:35.009618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.341 [2024-07-15 22:00:35.041491] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.341 [2024-07-15 22:00:35.041554] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.884 22:00:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.884 22:00:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.884 spdk_app_start Round 2 00:05:50.884 22:00:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62271 /var/tmp/spdk-nbd.sock 00:05:50.884 22:00:37 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62271 ']' 00:05:50.884 22:00:37 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.884 22:00:37 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.884 22:00:37 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
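Two small helpers recur in the NBD teardown above: asking the target how many devices it still exports, and waiting for the kernel node to disappear. A sketch of both, reconstructed from the trace (the function names here are illustrative and the polling interval is an assumption; the originals live in bdev/nbd_common.sh):

    # nbd_get_disks returns JSON; count the /dev/nbd entries it lists.
    count_nbd() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
    }

    # Poll /proc/partitions until the device is gone (mirrors waitfornbd_exit).
    wait_nbd_gone() {
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || return 0
            sleep 0.1        # assumed; the trace only shows the loop bounds
        done
        return 1
    }

    count_nbd          # 2 while both disks are attached, 0 after nbd_stop_disk
    wait_nbd_gone nbd0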
00:05:50.884 22:00:37 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.884 22:00:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.449 22:00:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.449 22:00:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:51.449 22:00:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.706 Malloc0 00:05:51.706 22:00:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.964 Malloc1 00:05:51.964 22:00:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.964 22:00:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.223 /dev/nbd0 00:05:52.223 22:00:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.223 22:00:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.223 1+0 records in 00:05:52.223 1+0 records out 
00:05:52.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029054 s, 14.1 MB/s 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.223 22:00:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:52.223 22:00:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.223 22:00:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.223 22:00:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.481 /dev/nbd1 00:05:52.481 22:00:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.481 22:00:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.481 1+0 records in 00:05:52.481 1+0 records out 00:05:52.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373411 s, 11.0 MB/s 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.481 22:00:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:52.481 22:00:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.481 22:00:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.481 22:00:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.481 22:00:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.481 22:00:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.046 22:00:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.046 { 00:05:53.046 "bdev_name": "Malloc0", 00:05:53.046 "nbd_device": "/dev/nbd0" 00:05:53.046 }, 00:05:53.046 { 00:05:53.046 "bdev_name": "Malloc1", 00:05:53.046 "nbd_device": "/dev/nbd1" 00:05:53.046 } 
00:05:53.046 ]' 00:05:53.046 22:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.046 { 00:05:53.046 "bdev_name": "Malloc0", 00:05:53.046 "nbd_device": "/dev/nbd0" 00:05:53.046 }, 00:05:53.046 { 00:05:53.046 "bdev_name": "Malloc1", 00:05:53.046 "nbd_device": "/dev/nbd1" 00:05:53.046 } 00:05:53.046 ]' 00:05:53.046 22:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.046 22:00:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.046 /dev/nbd1' 00:05:53.046 22:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.046 /dev/nbd1' 00:05:53.046 22:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.046 22:00:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.046 22:00:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.047 256+0 records in 00:05:53.047 256+0 records out 00:05:53.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00886676 s, 118 MB/s 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.047 256+0 records in 00:05:53.047 256+0 records out 00:05:53.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264914 s, 39.6 MB/s 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.047 256+0 records in 00:05:53.047 256+0 records out 00:05:53.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266734 s, 39.3 MB/s 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.047 22:00:39 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.047 22:00:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.305 22:00:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.305 22:00:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.305 22:00:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.305 22:00:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.305 22:00:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.305 22:00:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.305 22:00:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.305 22:00:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.305 22:00:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.305 22:00:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.569 22:00:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.569 22:00:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.569 22:00:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.569 22:00:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.569 22:00:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.569 22:00:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.569 22:00:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.569 22:00:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.569 22:00:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.569 22:00:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.569 22:00:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.825 22:00:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.825 22:00:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.825 22:00:40 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:53.825 22:00:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.825 22:00:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.825 22:00:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.825 22:00:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.825 22:00:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.825 22:00:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.825 22:00:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.825 22:00:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.825 22:00:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.825 22:00:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.390 22:00:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.390 [2024-07-15 22:00:41.212460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.390 [2024-07-15 22:00:41.273342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.390 [2024-07-15 22:00:41.273352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.390 [2024-07-15 22:00:41.304159] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.390 [2024-07-15 22:00:41.304213] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.693 22:00:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62271 /var/tmp/spdk-nbd.sock 00:05:57.693 22:00:44 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62271 ']' 00:05:57.693 22:00:44 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.693 22:00:44 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.693 22:00:44 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
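Between rounds the harness never restarts the binary from the shell; it asks the running app to stop its current iteration over RPC and lets app_repeat re-enter spdk_app_start on its own. The loop driving the three rounds is roughly this (reconstructed from the event.sh line numbers in the trace; the per-round bdev/NBD work is the earlier sketch):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock    # autotest helper, as traced
        # ...create malloc bdevs, export over NBD, dd/cmp verify (earlier sketch)...
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            spdk_kill_instance SIGTERM                        # app stops this iteration
        sleep 3                                               # give it time to restart
    done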
00:05:57.693 22:00:44 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.693 22:00:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.693 22:00:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.693 22:00:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:57.693 22:00:44 event.app_repeat -- event/event.sh@39 -- # killprocess 62271 00:05:57.693 22:00:44 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62271 ']' 00:05:57.694 22:00:44 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62271 00:05:57.694 22:00:44 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:57.694 22:00:44 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.694 22:00:44 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62271 00:05:57.694 killing process with pid 62271 00:05:57.694 22:00:44 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.694 22:00:44 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.694 22:00:44 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62271' 00:05:57.694 22:00:44 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62271 00:05:57.694 22:00:44 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62271 00:05:57.694 spdk_app_start is called in Round 0. 00:05:57.694 Shutdown signal received, stop current app iteration 00:05:57.694 Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 reinitialization... 00:05:57.694 spdk_app_start is called in Round 1. 00:05:57.694 Shutdown signal received, stop current app iteration 00:05:57.694 Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 reinitialization... 00:05:57.694 spdk_app_start is called in Round 2. 00:05:57.694 Shutdown signal received, stop current app iteration 00:05:57.694 Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 reinitialization... 00:05:57.694 spdk_app_start is called in Round 3. 
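The killprocess call that closes each test follows the defensive pattern the trace just expanded: confirm the PID is alive, confirm it is still the process we launched (a reactor, not something that reused the PID), then SIGTERM and reap it. Compressed into a sketch (the real helper lives in autotest_common.sh and also handles a sudo wrapper):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                            # still running?
        [ "$(uname)" = Linux ] && \
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 / reactor_2
        # (the real helper special-cases process_name = sudo; elided here)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap before the next test starts
    }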
00:05:57.694 Shutdown signal received, stop current app iteration 00:05:57.694 22:00:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:57.694 22:00:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:57.694 00:05:57.694 real 0m19.155s 00:05:57.694 user 0m43.742s 00:05:57.694 sys 0m2.928s 00:05:57.694 22:00:44 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.694 22:00:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.694 ************************************ 00:05:57.694 END TEST app_repeat 00:05:57.694 ************************************ 00:05:57.694 22:00:44 event -- common/autotest_common.sh@1142 -- # return 0 00:05:57.694 22:00:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:57.694 22:00:44 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:57.694 22:00:44 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.694 22:00:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.694 22:00:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.694 ************************************ 00:05:57.694 START TEST cpu_locks 00:05:57.694 ************************************ 00:05:57.694 22:00:44 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:57.951 * Looking for test storage... 00:05:57.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:57.951 22:00:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:57.951 22:00:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:57.951 22:00:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:57.951 22:00:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:57.951 22:00:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.951 22:00:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.951 22:00:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.952 ************************************ 00:05:57.952 START TEST default_locks 00:05:57.952 ************************************ 00:05:57.952 22:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:57.952 22:00:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62889 00:05:57.952 22:00:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62889 00:05:57.952 22:00:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.952 22:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62889 ']' 00:05:57.952 22:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.952 22:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.952 22:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:57.952 22:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.952 22:00:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.952 [2024-07-15 22:00:44.766320] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:05:57.952 [2024-07-15 22:00:44.766429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62889 ] 00:05:58.209 [2024-07-15 22:00:44.903068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.209 [2024-07-15 22:00:44.974564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.145 22:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.145 22:00:45 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:59.145 22:00:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62889 00:05:59.145 22:00:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62889 00:05:59.145 22:00:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.404 22:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62889 00:05:59.404 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62889 ']' 00:05:59.404 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62889 00:05:59.404 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:59.404 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.404 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62889 00:05:59.404 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.404 killing process with pid 62889 00:05:59.404 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.404 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62889' 00:05:59.404 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62889 00:05:59.404 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62889 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62889 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62889 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62889 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 62889 ']' 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.663 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62889) - No such process 00:05:59.663 ERROR: process (pid: 62889) is no longer running 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.663 00:05:59.663 real 0m1.735s 00:05:59.663 user 0m1.965s 00:05:59.663 sys 0m0.470s 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.663 22:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.663 ************************************ 00:05:59.663 END TEST default_locks 00:05:59.663 ************************************ 00:05:59.663 22:00:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.663 22:00:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:59.663 22:00:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.663 22:00:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.663 22:00:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.663 ************************************ 00:05:59.663 START TEST default_locks_via_rpc 00:05:59.663 ************************************ 00:05:59.663 22:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:59.663 22:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62953 00:05:59.663 22:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.663 22:00:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62953 00:05:59.663 22:00:46 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 62953 ']' 00:05:59.663 22:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.663 22:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.663 22:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.663 22:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.663 22:00:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.663 [2024-07-15 22:00:46.542621] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:05:59.663 [2024-07-15 22:00:46.542715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62953 ] 00:05:59.922 [2024-07-15 22:00:46.683009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.922 [2024-07-15 22:00:46.753450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62953 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62953 00:06:00.853 22:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.115 22:00:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62953 00:06:01.115 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62953 ']' 
00:06:01.115 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62953 00:06:01.115 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:01.115 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.115 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62953 00:06:01.115 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.115 killing process with pid 62953 00:06:01.115 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.115 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62953' 00:06:01.115 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62953 00:06:01.115 22:00:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62953 00:06:01.373 00:06:01.374 real 0m1.774s 00:06:01.374 user 0m2.039s 00:06:01.374 sys 0m0.465s 00:06:01.374 22:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.374 22:00:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.374 ************************************ 00:06:01.374 END TEST default_locks_via_rpc 00:06:01.374 ************************************ 00:06:01.374 22:00:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.374 22:00:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:01.374 22:00:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.374 22:00:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.374 22:00:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.374 ************************************ 00:06:01.374 START TEST non_locking_app_on_locked_coremask 00:06:01.374 ************************************ 00:06:01.374 22:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:01.374 22:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63022 00:06:01.374 22:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63022 /var/tmp/spdk.sock 00:06:01.374 22:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63022 ']' 00:06:01.374 22:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.374 22:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.374 22:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
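Note: both default_locks and default_locks_via_rpc above reduce to the same assertion: once spdk_tgt is up with -m 0x1 it must hold a file lock whose name contains spdk_cpu_lock, and the harness checks that with lslocks (cpu_locks.sh@22). The via_rpc variant additionally toggles the behaviour at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs. A minimal sketch of the lock check, written to mirror the xtrace above (the real helper may differ in detail):

    locks_exist() {
        local pid=$1
        # SPDK's per-core locks are lock files such as /var/tmp/spdk_cpu_lock_000;
        # lslocks -p lists the file locks held by that pid
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 62889 && echo "core lock present for pid 62889"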
00:06:01.374 22:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.374 22:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.374 22:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.631 [2024-07-15 22:00:48.371955] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:01.631 [2024-07-15 22:00:48.372056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63022 ] 00:06:01.631 [2024-07-15 22:00:48.509574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.631 [2024-07-15 22:00:48.571322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.563 22:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.563 22:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:02.563 22:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63050 00:06:02.563 22:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63050 /var/tmp/spdk2.sock 00:06:02.563 22:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63050 ']' 00:06:02.563 22:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.563 22:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.563 22:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.563 22:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.563 22:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:02.563 22:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.563 [2024-07-15 22:00:49.447114] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:02.563 [2024-07-15 22:00:49.447230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63050 ] 00:06:02.826 [2024-07-15 22:00:49.593845] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:02.826 [2024-07-15 22:00:49.593899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.826 [2024-07-15 22:00:49.715528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.788 22:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.788 22:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:03.788 22:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63022 00:06:03.788 22:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63022 00:06:03.788 22:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.355 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63022 00:06:04.355 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63022 ']' 00:06:04.355 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63022 00:06:04.355 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:04.355 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.355 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63022 00:06:04.355 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.355 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.355 killing process with pid 63022 00:06:04.355 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63022' 00:06:04.355 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63022 00:06:04.355 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63022 00:06:04.921 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63050 00:06:04.921 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63050 ']' 00:06:04.921 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63050 00:06:04.921 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:04.921 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.922 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63050 00:06:04.922 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.922 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.922 killing process with pid 63050 00:06:04.922 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63050' 00:06:04.922 22:00:51 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63050 00:06:04.922 22:00:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63050 00:06:05.180 00:06:05.180 real 0m3.718s 00:06:05.180 user 0m4.396s 00:06:05.180 sys 0m0.957s 00:06:05.180 22:00:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.180 ************************************ 00:06:05.180 END TEST non_locking_app_on_locked_coremask 00:06:05.180 ************************************ 00:06:05.180 22:00:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.180 22:00:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:05.180 22:00:52 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:05.180 22:00:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.180 22:00:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.180 22:00:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.180 ************************************ 00:06:05.180 START TEST locking_app_on_unlocked_coremask 00:06:05.180 ************************************ 00:06:05.180 22:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:05.180 22:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63124 00:06:05.180 22:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63124 /var/tmp/spdk.sock 00:06:05.180 22:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:05.180 22:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63124 ']' 00:06:05.180 22:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.180 22:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.180 22:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.180 22:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.180 22:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.180 [2024-07-15 22:00:52.122647] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:05.180 [2024-07-15 22:00:52.122735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63124 ] 00:06:05.438 [2024-07-15 22:00:52.258866] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
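Note: non_locking_app_on_locked_coremask above starts a second spdk_tgt on the same core as an already-running, lock-holding instance, but with --disable-cpumask-locks and a separate RPC socket, so it comes up anyway and prints "CPU core locks deactivated." A sketch of that launch pattern with the sockets from this run (pid bookkeeping is illustrative; the real script relies on waitforlisten from autotest_common.sh):

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &                                          # first instance claims core 0
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock                     # succeeds because locking is skipped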
00:06:05.438 [2024-07-15 22:00:52.258916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.438 [2024-07-15 22:00:52.331137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.373 22:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.373 22:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:06.373 22:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63152 00:06:06.373 22:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.373 22:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63152 /var/tmp/spdk2.sock 00:06:06.373 22:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63152 ']' 00:06:06.373 22:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.373 22:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.373 22:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.373 22:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.373 22:00:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.373 [2024-07-15 22:00:53.195537] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:06:06.373 [2024-07-15 22:00:53.195656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63152 ] 00:06:06.631 [2024-07-15 22:00:53.341348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.631 [2024-07-15 22:00:53.464903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.565 22:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.565 22:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:07.565 22:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63152 00:06:07.565 22:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63152 00:06:07.565 22:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.499 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63124 00:06:08.499 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63124 ']' 00:06:08.499 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63124 00:06:08.499 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:08.499 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.499 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63124 00:06:08.499 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.499 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.499 killing process with pid 63124 00:06:08.499 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63124' 00:06:08.499 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63124 00:06:08.499 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63124 00:06:08.756 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63152 00:06:08.756 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63152 ']' 00:06:08.756 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63152 00:06:08.756 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:08.756 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.756 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63152 00:06:08.756 killing process with pid 63152 00:06:08.756 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.756 22:00:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.756 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63152' 00:06:08.756 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63152 00:06:08.756 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63152 00:06:09.013 ************************************ 00:06:09.013 END TEST locking_app_on_unlocked_coremask 00:06:09.013 ************************************ 00:06:09.013 00:06:09.013 real 0m3.831s 00:06:09.013 user 0m4.588s 00:06:09.013 sys 0m0.936s 00:06:09.013 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.013 22:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.013 22:00:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:09.013 22:00:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:09.013 22:00:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.013 22:00:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.013 22:00:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.013 ************************************ 00:06:09.013 START TEST locking_app_on_locked_coremask 00:06:09.013 ************************************ 00:06:09.013 22:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:09.013 22:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63225 00:06:09.013 22:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.013 22:00:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63225 /var/tmp/spdk.sock 00:06:09.013 22:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63225 ']' 00:06:09.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.013 22:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.013 22:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.013 22:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.013 22:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.013 22:00:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.269 [2024-07-15 22:00:55.997702] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
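Note: locking_app_on_unlocked_coremask above is the mirror case: the first target runs with --disable-cpumask-locks and the second, default-configured target on the same core takes the lock, so the assertion points at the second pid (63152 in this run). Reusing the lock-check sketch from earlier:

    locks_exist 63152 && echo "core 0 lock is held by the second (locking) instance"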
00:06:09.269 [2024-07-15 22:00:55.997790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63225 ] 00:06:09.269 [2024-07-15 22:00:56.131389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.269 [2024-07-15 22:00:56.190632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63240 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63240 /var/tmp/spdk2.sock 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63240 /var/tmp/spdk2.sock 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:09.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63240 /var/tmp/spdk2.sock 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63240 ']' 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.530 22:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.530 [2024-07-15 22:00:56.411318] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:06:09.530 [2024-07-15 22:00:56.411413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63240 ] 00:06:09.904 [2024-07-15 22:00:56.557523] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63225 has claimed it. 00:06:09.904 [2024-07-15 22:00:56.557595] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.160 ERROR: process (pid: 63240) is no longer running 00:06:10.160 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63240) - No such process 00:06:10.160 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.160 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:10.160 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:10.160 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.160 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.160 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.160 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63225 00:06:10.160 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63225 00:06:10.160 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.724 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63225 00:06:10.725 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63225 ']' 00:06:10.725 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63225 00:06:10.725 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:10.725 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.725 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63225 00:06:10.725 killing process with pid 63225 00:06:10.725 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.725 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.725 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63225' 00:06:10.725 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63225 00:06:10.725 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63225 00:06:10.982 ************************************ 00:06:10.982 END TEST locking_app_on_locked_coremask 00:06:10.982 ************************************ 00:06:10.982 00:06:10.982 real 0m1.915s 00:06:10.982 user 0m2.241s 00:06:10.982 sys 0m0.520s 00:06:10.982 22:00:57 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.982 22:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.982 22:00:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:10.982 22:00:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.982 22:00:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.982 22:00:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.982 22:00:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.982 ************************************ 00:06:10.982 START TEST locking_overlapped_coremask 00:06:10.982 ************************************ 00:06:10.982 22:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:10.982 22:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63291 00:06:10.982 22:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.982 22:00:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63291 /var/tmp/spdk.sock 00:06:10.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.982 22:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63291 ']' 00:06:10.982 22:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.982 22:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.982 22:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.982 22:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.982 22:00:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.239 [2024-07-15 22:00:57.973641] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
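Note: locking_app_on_locked_coremask above checks the failure path: a second spdk_tgt without --disable-cpumask-locks must refuse to start ("Cannot create lock on core 0 ... Unable to acquire lock on assigned core mask - exiting"), and the script asserts that by wrapping waitforlisten in the NOT helper, which inverts the exit status. A simplified version of that idiom (the real NOT in autotest_common.sh also distinguishes signal exits, as the es > 128 check in the xtrace shows):

    NOT() {
        if "$@"; then
            return 1          # the command unexpectedly succeeded
        fi
        return 0              # the command failed, which is what the test wants
    }

    NOT waitforlisten 63240 /var/tmp/spdk2.sock    # second instance must never start listening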
00:06:11.239 [2024-07-15 22:00:57.973742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63291 ] 00:06:11.239 [2024-07-15 22:00:58.113900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.239 [2024-07-15 22:00:58.175284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.239 [2024-07-15 22:00:58.175354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.239 [2024-07-15 22:00:58.175360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63322 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63322 /var/tmp/spdk2.sock 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63322 /var/tmp/spdk2.sock 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63322 /var/tmp/spdk2.sock 00:06:12.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63322 ']' 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.171 22:00:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.171 [2024-07-15 22:00:59.024204] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:06:12.171 [2024-07-15 22:00:59.024295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63322 ] 00:06:12.430 [2024-07-15 22:00:59.170937] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63291 has claimed it. 00:06:12.430 [2024-07-15 22:00:59.171033] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.997 ERROR: process (pid: 63322) is no longer running 00:06:12.997 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63322) - No such process 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63291 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63291 ']' 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63291 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63291 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63291' 00:06:12.997 killing process with pid 63291 00:06:12.997 22:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63291 00:06:12.997 22:00:59 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63291 00:06:13.257 00:06:13.257 real 0m2.159s 00:06:13.257 user 0m6.231s 00:06:13.257 sys 0m0.338s 00:06:13.257 22:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.257 22:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.257 ************************************ 00:06:13.257 END TEST locking_overlapped_coremask 00:06:13.257 ************************************ 00:06:13.257 22:01:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:13.257 22:01:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:13.257 22:01:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.257 22:01:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.257 22:01:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.257 ************************************ 00:06:13.257 START TEST locking_overlapped_coremask_via_rpc 00:06:13.257 ************************************ 00:06:13.257 22:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:13.257 22:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:13.257 22:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63368 00:06:13.257 22:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63368 /var/tmp/spdk.sock 00:06:13.257 22:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63368 ']' 00:06:13.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.257 22:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.257 22:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.257 22:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.257 22:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.257 22:01:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.257 [2024-07-15 22:01:00.178377] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:13.257 [2024-07-15 22:01:00.178483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63368 ] 00:06:13.515 [2024-07-15 22:01:00.314618] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
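Note: locking_overlapped_coremask above pits -m 0x7 (cores 0-2) against -m 0x1c (cores 2-4); the masks overlap on core 2, so the second instance dies with "Cannot create lock on core 2", and check_remaining_locks (cpu_locks.sh@36..38) then verifies that exactly the first instance's three lock files are left. A condensed sketch of both checks:

    printf 'contested cores mask: 0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, i.e. core 2

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]              # only cores 0-2 remain locked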
00:06:13.515 [2024-07-15 22:01:00.314667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.515 [2024-07-15 22:01:00.377166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.515 [2024-07-15 22:01:00.377236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.515 [2024-07-15 22:01:00.377240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.446 22:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.446 22:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:14.446 22:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63398 00:06:14.446 22:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63398 /var/tmp/spdk2.sock 00:06:14.446 22:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:14.446 22:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63398 ']' 00:06:14.446 22:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.446 22:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.446 22:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.446 22:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.446 22:01:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.446 [2024-07-15 22:01:01.251910] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:14.446 [2024-07-15 22:01:01.252019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63398 ] 00:06:14.703 [2024-07-15 22:01:01.401395] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:14.703 [2024-07-15 22:01:01.401451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.703 [2024-07-15 22:01:01.523551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.703 [2024-07-15 22:01:01.527204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:14.703 [2024-07-15 22:01:01.527205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.653 [2024-07-15 22:01:02.251226] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63368 has claimed it. 
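The *ERROR* line above, and the JSON-RPC failure that follows it, are the intended outcome of this test: the first target holds cores 0-2 (-m 0x7) while the second asks for cores 2-4 (-m 0x1c), so once cpumask locks are enabled only one process can own the lock file for the shared core 2. A minimal manual reproduction might look like the sketch below; it assumes a built SPDK tree and that scripts/rpc.py in this revision exposes the framework_enable_cpumask_locks method (the test itself drives it through rpc_cmd / the Go JSON-RPC client), so treat it as illustrative rather than part of the test.

# start two targets with overlapping coremasks, core locks disabled at startup (mirrors the log)
./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &

# the first claim succeeds and creates /var/tmp/spdk_cpu_lock_000 .. _002
./scripts/rpc.py framework_enable_cpumask_locks

# the second claim should fail on the shared core 2, as it does above:
# "Cannot create lock on core 2, probably process <pid> has claimed it."
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks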
00:06:15.653 2024/07/15 22:01:02 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:15.653 request: 00:06:15.653 { 00:06:15.653 "method": "framework_enable_cpumask_locks", 00:06:15.653 "params": {} 00:06:15.653 } 00:06:15.653 Got JSON-RPC error response 00:06:15.653 GoRPCClient: error on JSON-RPC call 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63368 /var/tmp/spdk.sock 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63368 ']' 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.653 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.654 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.654 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.654 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.654 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.654 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63398 /var/tmp/spdk2.sock 00:06:15.654 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63398 ']' 00:06:15.654 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.654 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.654 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:15.654 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.654 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.911 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.911 ************************************ 00:06:15.911 END TEST locking_overlapped_coremask_via_rpc 00:06:15.911 ************************************ 00:06:15.911 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:15.911 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:15.911 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.911 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.911 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.911 00:06:15.911 real 0m2.728s 00:06:15.911 user 0m1.470s 00:06:15.911 sys 0m0.186s 00:06:15.911 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.911 22:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.169 22:01:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:16.169 22:01:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:16.169 22:01:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63368 ]] 00:06:16.169 22:01:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63368 00:06:16.169 22:01:02 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63368 ']' 00:06:16.169 22:01:02 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63368 00:06:16.169 22:01:02 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:16.169 22:01:02 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.169 22:01:02 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63368 00:06:16.169 killing process with pid 63368 00:06:16.169 22:01:02 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.169 22:01:02 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.169 22:01:02 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63368' 00:06:16.169 22:01:02 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63368 00:06:16.169 22:01:02 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63368 00:06:16.426 22:01:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63398 ]] 00:06:16.426 22:01:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63398 00:06:16.426 22:01:03 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63398 ']' 00:06:16.426 22:01:03 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63398 00:06:16.426 22:01:03 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:16.426 22:01:03 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.426 22:01:03 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63398 00:06:16.426 killing process with pid 63398 00:06:16.426 22:01:03 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:16.426 22:01:03 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:16.426 22:01:03 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63398' 00:06:16.426 22:01:03 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63398 00:06:16.426 22:01:03 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63398 00:06:16.689 22:01:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.689 22:01:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:16.690 22:01:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63368 ]] 00:06:16.690 22:01:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63368 00:06:16.690 22:01:03 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63368 ']' 00:06:16.690 22:01:03 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63368 00:06:16.690 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63368) - No such process 00:06:16.690 Process with pid 63368 is not found 00:06:16.690 22:01:03 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63368 is not found' 00:06:16.690 22:01:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63398 ]] 00:06:16.690 22:01:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63398 00:06:16.690 22:01:03 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63398 ']' 00:06:16.690 22:01:03 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63398 00:06:16.690 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63398) - No such process 00:06:16.690 Process with pid 63398 is not found 00:06:16.690 22:01:03 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63398 is not found' 00:06:16.690 22:01:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.690 00:06:16.690 real 0m18.858s 00:06:16.690 user 0m35.183s 00:06:16.690 sys 0m4.470s 00:06:16.690 ************************************ 00:06:16.690 END TEST cpu_locks 00:06:16.690 ************************************ 00:06:16.690 22:01:03 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.690 22:01:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.690 22:01:03 event -- common/autotest_common.sh@1142 -- # return 0 00:06:16.690 00:06:16.690 real 0m46.069s 00:06:16.690 user 1m31.044s 00:06:16.690 sys 0m8.060s 00:06:16.690 22:01:03 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.690 22:01:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.690 ************************************ 00:06:16.690 END TEST event 00:06:16.690 ************************************ 00:06:16.690 22:01:03 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.690 22:01:03 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:16.690 22:01:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.690 22:01:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.690 22:01:03 -- common/autotest_common.sh@10 -- # set +x 00:06:16.690 ************************************ 00:06:16.690 START TEST thread 
00:06:16.690 ************************************ 00:06:16.690 22:01:03 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:16.690 * Looking for test storage... 00:06:16.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:16.690 22:01:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.690 22:01:03 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:16.690 22:01:03 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.690 22:01:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.690 ************************************ 00:06:16.690 START TEST thread_poller_perf 00:06:16.690 ************************************ 00:06:16.690 22:01:03 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.947 [2024-07-15 22:01:03.648648] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:16.947 [2024-07-15 22:01:03.648742] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63545 ] 00:06:16.947 [2024-07-15 22:01:03.783471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.947 [2024-07-15 22:01:03.855225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.947 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:18.318 ====================================== 00:06:18.318 busy:2208477709 (cyc) 00:06:18.318 total_run_count: 285000 00:06:18.318 tsc_hz: 2200000000 (cyc) 00:06:18.318 ====================================== 00:06:18.318 poller_cost: 7749 (cyc), 3522 (nsec) 00:06:18.318 00:06:18.318 real 0m1.317s 00:06:18.318 user 0m1.167s 00:06:18.318 sys 0m0.042s 00:06:18.318 22:01:04 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.318 ************************************ 00:06:18.318 END TEST thread_poller_perf 00:06:18.318 ************************************ 00:06:18.318 22:01:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.318 22:01:04 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:18.318 22:01:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:18.318 22:01:04 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:18.318 22:01:04 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.318 22:01:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.318 ************************************ 00:06:18.318 START TEST thread_poller_perf 00:06:18.318 ************************************ 00:06:18.318 22:01:04 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:18.318 [2024-07-15 22:01:05.015617] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
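The poller_cost figure reported by these poller_perf runs is simply busy cycles divided by the run count, converted to nanoseconds with the printed TSC frequency. A quick sanity check with the numbers from the first run above, as an illustrative awk one-liner outside the test:

awk 'BEGIN { busy=2208477709; runs=285000; tsc_hz=2200000000;
             cyc = busy / runs;
             printf "%.0f cyc, %.0f nsec\n", cyc, cyc / (tsc_hz / 1e9) }'
# prints "7749 cyc, 3522 nsec", matching the report for the 1-microsecond-period pollers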
00:06:18.318 [2024-07-15 22:01:05.015725] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63580 ] 00:06:18.318 [2024-07-15 22:01:05.153191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.318 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:18.318 [2024-07-15 22:01:05.217447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.691 ====================================== 00:06:19.691 busy:2201775455 (cyc) 00:06:19.691 total_run_count: 3888000 00:06:19.691 tsc_hz: 2200000000 (cyc) 00:06:19.691 ====================================== 00:06:19.691 poller_cost: 566 (cyc), 257 (nsec) 00:06:19.691 00:06:19.691 real 0m1.292s 00:06:19.691 user 0m1.134s 00:06:19.691 sys 0m0.050s 00:06:19.691 22:01:06 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.691 ************************************ 00:06:19.691 END TEST thread_poller_perf 00:06:19.691 22:01:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.691 ************************************ 00:06:19.691 22:01:06 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:19.691 22:01:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:19.691 00:06:19.691 real 0m2.780s 00:06:19.691 user 0m2.364s 00:06:19.691 sys 0m0.199s 00:06:19.691 22:01:06 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.691 ************************************ 00:06:19.691 22:01:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.691 END TEST thread 00:06:19.691 ************************************ 00:06:19.691 22:01:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.691 22:01:06 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:19.691 22:01:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.691 22:01:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.691 22:01:06 -- common/autotest_common.sh@10 -- # set +x 00:06:19.691 ************************************ 00:06:19.691 START TEST accel 00:06:19.691 ************************************ 00:06:19.691 22:01:06 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:19.691 * Looking for test storage... 00:06:19.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:19.691 22:01:06 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:19.691 22:01:06 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:19.691 22:01:06 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:19.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.691 22:01:06 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63655 00:06:19.691 22:01:06 accel -- accel/accel.sh@63 -- # waitforlisten 63655 00:06:19.691 22:01:06 accel -- common/autotest_common.sh@829 -- # '[' -z 63655 ']' 00:06:19.691 22:01:06 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.691 22:01:06 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.691 22:01:06 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:19.691 22:01:06 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.691 22:01:06 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:19.691 22:01:06 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:19.691 22:01:06 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.691 22:01:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.691 22:01:06 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.691 22:01:06 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.691 22:01:06 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.691 22:01:06 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.691 22:01:06 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:19.691 22:01:06 accel -- accel/accel.sh@41 -- # jq -r . 00:06:19.691 [2024-07-15 22:01:06.515309] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:19.691 [2024-07-15 22:01:06.515395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63655 ] 00:06:19.949 [2024-07-15 22:01:06.645728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.949 [2024-07-15 22:01:06.706151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@862 -- # return 0 00:06:20.885 22:01:07 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:20.885 22:01:07 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:20.885 22:01:07 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:20.885 22:01:07 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:20.885 22:01:07 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:20.885 22:01:07 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:20.885 22:01:07 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 
22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # IFS== 00:06:20.885 22:01:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:20.885 22:01:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:20.885 22:01:07 accel -- accel/accel.sh@75 -- # killprocess 63655 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@948 -- # '[' -z 63655 ']' 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@952 -- # kill -0 63655 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@953 -- # uname 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63655 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.885 killing process with pid 63655 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63655' 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@967 -- # kill 63655 00:06:20.885 22:01:07 accel -- common/autotest_common.sh@972 -- # wait 63655 00:06:21.144 22:01:07 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:21.144 22:01:07 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:21.144 22:01:07 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:21.144 22:01:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.144 22:01:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.144 22:01:07 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:21.144 22:01:07 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:21.144 22:01:07 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:21.144 22:01:07 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.144 22:01:07 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.144 22:01:07 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.144 22:01:07 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.144 22:01:07 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.144 22:01:07 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:21.144 22:01:07 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:21.144 22:01:07 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.144 22:01:07 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:21.144 22:01:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.144 22:01:07 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:21.144 22:01:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:21.144 22:01:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.144 22:01:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.144 ************************************ 00:06:21.144 START TEST accel_missing_filename 00:06:21.144 ************************************ 00:06:21.144 22:01:07 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:21.144 22:01:07 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:21.144 22:01:07 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:21.144 22:01:07 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:21.144 22:01:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.144 22:01:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:21.144 22:01:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.144 22:01:07 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:21.144 22:01:07 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:21.144 22:01:07 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:21.144 22:01:07 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.144 22:01:07 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.144 22:01:07 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.144 22:01:07 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.144 22:01:07 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.144 22:01:07 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:21.144 22:01:07 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:21.144 [2024-07-15 22:01:07.994367] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:21.144 [2024-07-15 22:01:07.994466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63724 ] 00:06:21.401 [2024-07-15 22:01:08.131112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.401 [2024-07-15 22:01:08.190887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.401 [2024-07-15 22:01:08.222217] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.401 [2024-07-15 22:01:08.262873] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:21.401 A filename is required. 
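As the "A filename is required." error above indicates, compress/decompress workloads need an input file passed with -l. A working invocation is sketched below purely as an illustration: it reuses the bib test file that the next test case feeds in, and drops -y because compression does not support the verify option, as that test goes on to demonstrate.

# compress the bib test file for one second; -o 0 makes the transfer size follow the input file size
./build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -o 0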
00:06:21.401 22:01:08 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:21.658 22:01:08 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.658 22:01:08 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:21.658 22:01:08 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:21.658 22:01:08 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:21.658 22:01:08 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:21.658 00:06:21.658 real 0m0.378s 00:06:21.658 user 0m0.248s 00:06:21.658 sys 0m0.075s 00:06:21.658 22:01:08 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.658 ************************************ 00:06:21.658 END TEST accel_missing_filename 00:06:21.658 22:01:08 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:21.658 ************************************ 00:06:21.658 22:01:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.658 22:01:08 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.658 22:01:08 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:21.658 22:01:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.658 22:01:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.658 ************************************ 00:06:21.658 START TEST accel_compress_verify 00:06:21.658 ************************************ 00:06:21.658 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.658 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:21.658 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.658 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:21.658 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.658 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:21.658 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.658 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.658 22:01:08 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:21.658 22:01:08 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:21.658 22:01:08 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.658 22:01:08 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.658 22:01:08 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.658 22:01:08 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.658 22:01:08 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.658 22:01:08 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:21.658 22:01:08 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:21.658 [2024-07-15 22:01:08.415892] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:21.658 [2024-07-15 22:01:08.415979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63749 ] 00:06:21.658 [2024-07-15 22:01:08.551689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.916 [2024-07-15 22:01:08.613690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.916 [2024-07-15 22:01:08.646580] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.916 [2024-07-15 22:01:08.692153] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:21.916 00:06:21.916 Compression does not support the verify option, aborting. 00:06:21.916 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:21.916 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.916 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:21.916 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:21.916 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:21.916 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:21.916 00:06:21.916 real 0m0.380s 00:06:21.916 user 0m0.255s 00:06:21.916 sys 0m0.071s 00:06:21.916 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.916 22:01:08 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:21.916 ************************************ 00:06:21.916 END TEST accel_compress_verify 00:06:21.916 ************************************ 00:06:21.916 22:01:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.916 22:01:08 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:21.916 22:01:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:21.916 22:01:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.916 22:01:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.916 ************************************ 00:06:21.916 START TEST accel_wrong_workload 00:06:21.916 ************************************ 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:21.916 22:01:08 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:21.916 22:01:08 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:21.916 22:01:08 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.916 22:01:08 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.916 22:01:08 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.916 22:01:08 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.916 22:01:08 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.916 22:01:08 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:21.916 22:01:08 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:21.916 Unsupported workload type: foobar 00:06:21.916 [2024-07-15 22:01:08.848699] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:21.916 accel_perf options: 00:06:21.916 [-h help message] 00:06:21.916 [-q queue depth per core] 00:06:21.916 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:21.916 [-T number of threads per core 00:06:21.916 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:21.916 [-t time in seconds] 00:06:21.916 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:21.916 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:21.916 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:21.916 [-l for compress/decompress workloads, name of uncompressed input file 00:06:21.916 [-S for crc32c workload, use this seed value (default 0) 00:06:21.916 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:21.916 [-f for fill workload, use this BYTE value (default 255) 00:06:21.916 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:21.916 [-y verify result if this switch is on] 00:06:21.916 [-a tasks to allocate per core (default: same value as -q)] 00:06:21.916 Can be used to spread operations across a wider range of memory. 
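The usage text above lists the workload types accel_perf accepts, which is why "foobar" here (and, just below, a negative -x buffer count) is rejected during argument parsing rather than at run time. The crc32c case exercised a few tests further down uses one of the supported types; run standalone it reduces to the line below (the test additionally passes a generated JSON config via -c /dev/fd/62, omitted in this sketch since no accel modules are configured for it).

# CRC-32C over 4 KiB buffers for 1 second, seed 32, verifying each result with the software module
./build/examples/accel_perf -t 1 -w crc32c -S 32 -y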
00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:21.916 00:06:21.916 real 0m0.030s 00:06:21.916 user 0m0.016s 00:06:21.916 sys 0m0.013s 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.916 22:01:08 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:21.916 ************************************ 00:06:21.916 END TEST accel_wrong_workload 00:06:21.916 ************************************ 00:06:22.175 22:01:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.175 22:01:08 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:22.175 22:01:08 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:22.175 22:01:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.175 22:01:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.175 ************************************ 00:06:22.175 START TEST accel_negative_buffers 00:06:22.175 ************************************ 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:22.175 22:01:08 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:22.175 22:01:08 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:22.175 22:01:08 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.175 22:01:08 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.175 22:01:08 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.175 22:01:08 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.175 22:01:08 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.175 22:01:08 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:22.175 22:01:08 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:22.175 -x option must be non-negative. 
00:06:22.175 [2024-07-15 22:01:08.921406] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:22.175 accel_perf options: 00:06:22.175 [-h help message] 00:06:22.175 [-q queue depth per core] 00:06:22.175 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:22.175 [-T number of threads per core 00:06:22.175 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:22.175 [-t time in seconds] 00:06:22.175 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:22.175 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:22.175 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:22.175 [-l for compress/decompress workloads, name of uncompressed input file 00:06:22.175 [-S for crc32c workload, use this seed value (default 0) 00:06:22.175 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:22.175 [-f for fill workload, use this BYTE value (default 255) 00:06:22.175 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:22.175 [-y verify result if this switch is on] 00:06:22.175 [-a tasks to allocate per core (default: same value as -q)] 00:06:22.175 Can be used to spread operations across a wider range of memory. 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.175 00:06:22.175 real 0m0.031s 00:06:22.175 user 0m0.018s 00:06:22.175 sys 0m0.012s 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.175 22:01:08 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:22.175 ************************************ 00:06:22.175 END TEST accel_negative_buffers 00:06:22.175 ************************************ 00:06:22.175 22:01:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.175 22:01:08 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:22.175 22:01:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:22.175 22:01:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.175 22:01:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.175 ************************************ 00:06:22.175 START TEST accel_crc32c 00:06:22.175 ************************************ 00:06:22.175 22:01:08 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:22.175 22:01:08 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:22.175 [2024-07-15 22:01:08.988243] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:22.175 [2024-07-15 22:01:08.988332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63806 ] 00:06:22.434 [2024-07-15 22:01:09.127489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.434 [2024-07-15 22:01:09.197313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.434 22:01:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.810 22:01:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:23.811 22:01:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.811 00:06:23.811 real 0m1.380s 00:06:23.811 user 0m0.017s 00:06:23.811 sys 0m0.001s 00:06:23.811 22:01:10 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.811 ************************************ 00:06:23.811 END TEST accel_crc32c 00:06:23.811 ************************************ 00:06:23.811 22:01:10 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:23.811 22:01:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.811 22:01:10 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:23.811 22:01:10 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:23.811 22:01:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.811 22:01:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.811 ************************************ 00:06:23.811 START TEST accel_crc32c_C2 00:06:23.811 ************************************ 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:23.811 22:01:10 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:23.811 [2024-07-15 22:01:10.412331] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:23.811 [2024-07-15 22:01:10.412423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63839 ] 00:06:23.811 [2024-07-15 22:01:10.546632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.811 [2024-07-15 22:01:10.607921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:23.811 22:01:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.181 22:01:11 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.181 00:06:25.181 real 0m1.379s 00:06:25.181 user 0m1.213s 00:06:25.181 sys 0m0.069s 00:06:25.181 ************************************ 00:06:25.181 END TEST accel_crc32c_C2 00:06:25.181 ************************************ 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.181 22:01:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:25.181 22:01:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.181 22:01:11 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:25.181 22:01:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:25.181 22:01:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.181 22:01:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.181 ************************************ 00:06:25.181 START TEST accel_copy 00:06:25.181 ************************************ 00:06:25.181 22:01:11 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:25.181 22:01:11 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:25.181 22:01:11 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:25.181 22:01:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.181 22:01:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.181 22:01:11 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:25.181 22:01:11 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:25.181 22:01:11 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:25.181 22:01:11 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.181 22:01:11 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.181 22:01:11 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.181 22:01:11 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.181 22:01:11 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.182 22:01:11 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:25.182 22:01:11 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:25.182 [2024-07-15 22:01:11.849641] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:25.182 [2024-07-15 22:01:11.849755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63871 ] 00:06:25.182 [2024-07-15 22:01:11.985847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.182 [2024-07-15 22:01:12.057774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 
22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:25.182 22:01:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:26.557 22:01:13 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.557 00:06:26.557 real 0m1.400s 00:06:26.557 user 0m1.226s 00:06:26.557 sys 0m0.081s 00:06:26.557 ************************************ 00:06:26.557 END TEST accel_copy 00:06:26.557 ************************************ 00:06:26.557 22:01:13 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.557 22:01:13 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:26.557 22:01:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.557 22:01:13 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.557 22:01:13 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:26.557 22:01:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.557 22:01:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.557 ************************************ 00:06:26.557 START TEST accel_fill 00:06:26.557 ************************************ 00:06:26.557 22:01:13 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.557 22:01:13 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:26.557 22:01:13 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:26.557 [2024-07-15 22:01:13.296174] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:26.557 [2024-07-15 22:01:13.296253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63905 ] 00:06:26.557 [2024-07-15 22:01:13.429199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.557 [2024-07-15 22:01:13.489454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.815 22:01:13 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.815 22:01:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:27.747 22:01:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.747 00:06:27.747 real 0m1.360s 00:06:27.747 user 0m0.013s 00:06:27.747 sys 0m0.002s 00:06:27.747 22:01:14 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.747 22:01:14 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:27.747 ************************************ 00:06:27.747 END TEST accel_fill 00:06:27.747 ************************************ 00:06:27.747 22:01:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.747 22:01:14 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:27.747 22:01:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:27.747 22:01:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.747 22:01:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.747 ************************************ 00:06:27.747 START TEST accel_copy_crc32c 00:06:27.747 ************************************ 00:06:27.747 22:01:14 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:27.747 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:27.748 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:28.009 [2024-07-15 22:01:14.708138] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:28.009 [2024-07-15 22:01:14.708244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63942 ] 00:06:28.009 [2024-07-15 22:01:14.841667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.009 [2024-07-15 22:01:14.910075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.009 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.009 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.009 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.009 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.009 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.009 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.010 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.011 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:28.011 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:28.011 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.011 22:01:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.386 00:06:29.386 real 0m1.370s 00:06:29.386 user 0m1.201s 00:06:29.386 sys 0m0.077s 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.386 22:01:16 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:29.386 ************************************ 00:06:29.386 END TEST accel_copy_crc32c 00:06:29.386 ************************************ 00:06:29.386 22:01:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.386 22:01:16 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:29.386 22:01:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:29.386 22:01:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.386 22:01:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.386 ************************************ 00:06:29.386 START TEST accel_copy_crc32c_C2 00:06:29.386 ************************************ 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:29.386 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:29.386 [2024-07-15 22:01:16.132390] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:29.386 [2024-07-15 22:01:16.132533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63971 ] 00:06:29.386 [2024-07-15 22:01:16.278673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.644 [2024-07-15 22:01:16.336502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:29.644 22:01:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.574 00:06:30.574 real 0m1.382s 00:06:30.574 user 0m1.201s 00:06:30.574 sys 0m0.082s 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:30.574 ************************************ 00:06:30.574 END TEST accel_copy_crc32c_C2 00:06:30.574 22:01:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:30.574 ************************************ 00:06:30.833 22:01:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.833 22:01:17 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:30.833 22:01:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.833 22:01:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.833 22:01:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.833 ************************************ 00:06:30.833 START TEST accel_dualcast 00:06:30.833 ************************************ 00:06:30.833 22:01:17 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:30.833 22:01:17 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:30.833 [2024-07-15 22:01:17.570478] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:06:30.833 [2024-07-15 22:01:17.570645] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64010 ] 00:06:30.833 [2024-07-15 22:01:17.704589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.833 [2024-07-15 22:01:17.766664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:31.092 22:01:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:32.027 22:01:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.027 00:06:32.027 real 0m1.376s 00:06:32.027 user 0m1.205s 00:06:32.027 sys 0m0.077s 00:06:32.027 22:01:18 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.027 22:01:18 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:32.027 ************************************ 00:06:32.027 END TEST accel_dualcast 00:06:32.027 ************************************ 00:06:32.027 22:01:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.027 22:01:18 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:32.027 22:01:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:32.027 22:01:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.027 22:01:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.027 ************************************ 00:06:32.027 START TEST accel_compare 00:06:32.027 ************************************ 00:06:32.027 22:01:18 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:32.027 22:01:18 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:32.285 [2024-07-15 22:01:18.984554] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
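The block above is the tail of the accel_dualcast pass (note the END TEST marker and the real/user/sys timing emitted around run_test). A minimal sketch of an equivalent standalone run, using the accel_perf path that appears verbatim in the trace; the -w dualcast value is inferred from the val=dualcast readout above, and dropping the -c /dev/fd/62 JSON config for a plain software run is an assumption:

  # 1-second software dualcast pass with result verification (-y), hypothetical manual run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y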
00:06:32.285 [2024-07-15 22:01:18.984650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64040 ] 00:06:32.285 [2024-07-15 22:01:19.123593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.285 [2024-07-15 22:01:19.192736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.285 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:32.543 22:01:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.482 22:01:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.482 22:01:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.482 22:01:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.482 22:01:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.482 22:01:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.482 22:01:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.482 22:01:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.482 22:01:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.482 22:01:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.482 22:01:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.482 22:01:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:33.483 22:01:20 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.483 00:06:33.483 real 0m1.378s 00:06:33.483 user 0m1.218s 00:06:33.483 sys 0m0.067s 00:06:33.483 22:01:20 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.483 ************************************ 00:06:33.483 END TEST accel_compare 00:06:33.483 ************************************ 00:06:33.483 22:01:20 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:33.483 22:01:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.483 22:01:20 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:33.483 22:01:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:33.483 22:01:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.483 22:01:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.483 ************************************ 00:06:33.483 START TEST accel_xor 00:06:33.483 ************************************ 00:06:33.483 22:01:20 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:33.483 22:01:20 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:33.483 [2024-07-15 22:01:20.405530] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
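The compare pass is launched as run_test accel_compare accel_test -t 1 -w compare -y, and the trace shows accel_perf being invoked with exactly those flags plus -c /dev/fd/62 for the generated JSON config. A minimal standalone sketch, same binary path as in the trace, with the config omitted (an assumption for a software-only run):

  # 1-second software compare pass, verifying results (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y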
00:06:33.483 [2024-07-15 22:01:20.405615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64069 ] 00:06:33.756 [2024-07-15 22:01:20.540398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.756 [2024-07-15 22:01:20.615214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.756 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.757 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.757 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:33.757 22:01:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:33.757 22:01:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:33.757 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:33.757 22:01:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.130 22:01:21 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.130 00:06:35.130 real 0m1.389s 00:06:35.130 user 0m1.213s 00:06:35.130 sys 0m0.082s 00:06:35.130 22:01:21 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.130 22:01:21 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:35.130 ************************************ 00:06:35.130 END TEST accel_xor 00:06:35.130 ************************************ 00:06:35.130 22:01:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.130 22:01:21 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:35.130 22:01:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:35.130 22:01:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.130 22:01:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.130 ************************************ 00:06:35.130 START TEST accel_xor 00:06:35.130 ************************************ 00:06:35.130 22:01:21 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:35.130 22:01:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:35.130 [2024-07-15 22:01:21.844824] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
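Two xor passes run back to back: the first uses the default two source buffers (val=2 in the readout above), the second is started as run_test accel_xor accel_test -t 1 -w xor -y -x 3. A minimal sketch of both, same binary path as in the trace:

  # xor across the default two source buffers
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
  # xor across three source buffers, matching the -x 3 pass
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3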
00:06:35.130 [2024-07-15 22:01:21.844918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64109 ] 00:06:35.130 [2024-07-15 22:01:21.982673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.130 [2024-07-15 22:01:22.054966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:35.387 22:01:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.320 22:01:23 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:36.320 22:01:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.320 00:06:36.320 real 0m1.388s 00:06:36.320 user 0m1.208s 00:06:36.320 sys 0m0.083s 00:06:36.320 22:01:23 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.320 ************************************ 00:06:36.320 END TEST accel_xor 00:06:36.320 ************************************ 00:06:36.320 22:01:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:36.320 22:01:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.320 22:01:23 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:36.320 22:01:23 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:36.320 22:01:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.320 22:01:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.320 ************************************ 00:06:36.320 START TEST accel_dif_verify 00:06:36.320 ************************************ 00:06:36.320 22:01:23 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:36.320 22:01:23 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:36.578 [2024-07-15 22:01:23.273917] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
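The DIF verify pass starting here is launched as run_test accel_dif_verify accel_test -t 1 -w dif_verify; the 4096-, 512- and 8-byte values echoed in the readout below are the script defaults (reading them as buffer, block and metadata sizes is an assumption). A minimal standalone sketch:

  # 1-second software DIF verify pass at the script's default sizes
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify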
00:06:36.578 [2024-07-15 22:01:23.274028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64138 ] 00:06:36.578 [2024-07-15 22:01:23.413033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.578 [2024-07-15 22:01:23.484391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.578 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.836 22:01:23 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:36.836 22:01:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.768 22:01:24 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:37.768 22:01:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.768 00:06:37.768 real 0m1.389s 00:06:37.768 user 0m1.218s 00:06:37.768 sys 0m0.076s 00:06:37.768 22:01:24 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.768 22:01:24 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:37.768 ************************************ 00:06:37.768 END TEST accel_dif_verify 00:06:37.768 ************************************ 00:06:37.768 22:01:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.768 22:01:24 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:37.768 22:01:24 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:37.768 22:01:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.768 22:01:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.768 ************************************ 00:06:37.768 START TEST accel_dif_generate 00:06:37.768 ************************************ 00:06:37.768 22:01:24 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:37.768 22:01:24 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:37.768 22:01:24 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:37.768 [2024-07-15 22:01:24.711474] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:37.768 [2024-07-15 22:01:24.711596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64174 ] 00:06:38.025 [2024-07-15 22:01:24.849873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.025 [2024-07-15 22:01:24.943406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.284 22:01:24 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.284 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.285 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.285 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.285 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.285 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.285 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:38.285 22:01:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:38.285 22:01:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:38.285 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:38.285 22:01:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:39.222 22:01:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.222 00:06:39.222 real 0m1.412s 
00:06:39.222 user 0m1.232s 00:06:39.222 sys 0m0.088s 00:06:39.222 22:01:26 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.222 ************************************ 00:06:39.222 END TEST accel_dif_generate 00:06:39.222 ************************************ 00:06:39.222 22:01:26 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:39.222 22:01:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.222 22:01:26 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:39.222 22:01:26 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:39.222 22:01:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.222 22:01:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.222 ************************************ 00:06:39.222 START TEST accel_dif_generate_copy 00:06:39.222 ************************************ 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:39.222 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:39.499 [2024-07-15 22:01:26.174727] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
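The dense `case "$var"` / `IFS=:` / `read -r var val` lines above are accel.sh parsing the configuration summary that accel_perf prints on startup (what look like a 4096-byte transfer, 512-byte blocks and 8 bytes of DIF metadata, run for 1 second on the software engine). The accel_dif_generate test passes because the parsed module comes back as `software` and the parsed opcode as `dif_generate`, with the whole run taking about 1.41 s of wall time. A minimal, self-contained sketch of that parsing pattern, with illustrative field names rather than accel_perf's actual output:

  #!/usr/bin/env bash
  # Read "field: value" lines, capture the engine and opcode, assert the software path ran.
  parse_demo() {
    local var val accel_module='' accel_opc=''
    while IFS=: read -r var val; do
      case "$var" in
        *Module*)   accel_module=${val//[[:space:]]/} ;;  # illustrative field name
        *Workload*) accel_opc=${val//[[:space:]]/} ;;     # illustrative field name
      esac
    done
    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]
  }
  printf 'Module: software\nWorkload: dif_generate\n' | parse_demo && echo PASS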
00:06:39.499 [2024-07-15 22:01:26.174832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64209 ] 00:06:39.499 [2024-07-15 22:01:26.344204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.499 [2024-07-15 22:01:26.443374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.758 22:01:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.690 00:06:40.690 real 0m1.449s 00:06:40.690 user 0m0.013s 00:06:40.690 sys 0m0.002s 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.690 ************************************ 00:06:40.690 END TEST accel_dif_generate_copy 00:06:40.690 ************************************ 00:06:40.690 22:01:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:40.948 22:01:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.948 22:01:27 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:40.948 22:01:27 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.948 22:01:27 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:40.948 22:01:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.948 22:01:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.948 ************************************ 00:06:40.948 START TEST accel_comp 00:06:40.948 ************************************ 00:06:40.948 22:01:27 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:40.948 22:01:27 
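The accel_dif_generate_copy run above is the same harness pattern with the `dif_generate_copy` opcode: accel_perf starts on a single core (EAL core mask 0x1, pid 64209), the JSON accel config is fed in over `/dev/fd/62` (empty in this run, per `accel_json_cfg=()`), and the test passes in roughly 1.45 s. A hedged sketch of the equivalent standalone invocation, assuming the same checkout path as this job and dropping the harness's config pipe:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy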
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:40.948 22:01:27 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:40.948 [2024-07-15 22:01:27.673507] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:40.948 [2024-07-15 22:01:27.673605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64238 ] 00:06:40.948 [2024-07-15 22:01:27.810620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.948 [2024-07-15 22:01:27.881548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.205 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.206 22:01:27 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:41.206 22:01:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:42.138 22:01:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.138 00:06:42.138 real 0m1.395s 00:06:42.138 user 0m1.211s 00:06:42.138 sys 0m0.089s 00:06:42.138 22:01:29 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.138 ************************************ 00:06:42.138 END TEST accel_comp 00:06:42.138 ************************************ 00:06:42.138 22:01:29 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:42.138 22:01:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.138 22:01:29 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.138 22:01:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:42.138 22:01:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.138 22:01:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.396 ************************************ 00:06:42.396 START TEST accel_decomp 00:06:42.396 ************************************ 00:06:42.396 22:01:29 
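accel_comp switches the workload to `compress` and adds `-l /home/vagrant/spdk_repo/spdk/test/accel/bib`, which points accel_perf at the bib payload these compression tests reuse; the software path completes in about 1.40 s. Re-running it by hand would look roughly like this (same path assumption as above, harness config pipe omitted):

  # -w compress selects the compress opcode; -l supplies the input payload
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib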
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.396 22:01:29 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:42.396 22:01:29 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:42.397 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.397 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.397 22:01:29 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.397 22:01:29 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.397 22:01:29 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:42.397 22:01:29 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.397 22:01:29 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.397 22:01:29 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.397 22:01:29 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.397 22:01:29 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.397 22:01:29 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:42.397 22:01:29 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:42.397 [2024-07-15 22:01:29.118017] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:42.397 [2024-07-15 22:01:29.118147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64278 ] 00:06:42.397 [2024-07-15 22:01:29.261821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.397 [2024-07-15 22:01:29.334632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:42.655 22:01:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.590 22:01:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.590 00:06:43.590 real 0m1.398s 00:06:43.590 user 0m1.221s 00:06:43.590 sys 0m0.081s 00:06:43.590 22:01:30 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.590 ************************************ 00:06:43.590 END TEST accel_decomp 00:06:43.590 ************************************ 00:06:43.590 22:01:30 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:43.590 22:01:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.590 22:01:30 accel -- accel/accel.sh@118 -- # run_test 
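accel_decomp is the mirror image: `-w decompress` against the same bib file, now with `-y` added, and the parsed configuration flips from `val=No` to `val=Yes`, which suggests `-y` turns on verification of the decompressed output (the option help is not part of this log, so treat that reading as an assumption). The run passes in about 1.40 s. Equivalent sketch:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y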
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:43.590 22:01:30 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:43.590 22:01:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.590 22:01:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.590 ************************************ 00:06:43.590 START TEST accel_decomp_full 00:06:43.590 ************************************ 00:06:43.590 22:01:30 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:43.590 22:01:30 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:43.590 22:01:30 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:43.590 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.590 22:01:30 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:43.590 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:43.849 [2024-07-15 22:01:30.557499] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:06:43.849 [2024-07-15 22:01:30.557587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64307 ] 00:06:43.849 [2024-07-15 22:01:30.691466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.849 [2024-07-15 22:01:30.752252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:43.849 22:01:30 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:43.849 22:01:30 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:44.108 22:01:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.044 22:01:31 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:45.044 ************************************ 00:06:45.044 END TEST accel_decomp_full 00:06:45.044 ************************************ 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:45.044 22:01:31 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.044 00:06:45.044 real 0m1.388s 00:06:45.044 user 0m1.216s 00:06:45.044 sys 0m0.077s 00:06:45.044 22:01:31 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.044 22:01:31 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:45.044 22:01:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.044 22:01:31 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:45.044 22:01:31 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:45.044 22:01:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.044 22:01:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.044 ************************************ 00:06:45.044 START TEST accel_decomp_mcore 00:06:45.044 ************************************ 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- 
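accel_decomp_full adds `-o 0`, and the parsed transfer size changes from '4096 bytes' to '111250 bytes', i.e. the whole bib payload appears to be decompressed as one buffer instead of 4 KiB chunks; the single-core run still finishes in about 1.39 s. Sketch under the same assumptions as the earlier ones:

  # -o 0 appears to request full-buffer transfers (111250 bytes here) rather than 4096-byte chunks
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0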
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:45.044 22:01:31 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:45.303 [2024-07-15 22:01:31.999099] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:45.303 [2024-07-15 22:01:31.999189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64342 ] 00:06:45.303 [2024-07-15 22:01:32.129778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.303 [2024-07-15 22:01:32.214110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.303 [2024-07-15 22:01:32.214236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.303 [2024-07-15 22:01:32.214302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.303 [2024-07-15 22:01:32.214627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:45.562 22:01:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.562 22:01:33 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.562 ************************************ 00:06:46.562 END TEST accel_decomp_mcore 00:06:46.562 ************************************ 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.562 00:06:46.562 real 0m1.440s 00:06:46.562 user 0m4.569s 00:06:46.562 sys 0m0.083s 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.562 22:01:33 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:46.562 22:01:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.562 22:01:33 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.562 22:01:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:46.562 22:01:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.562 22:01:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.562 ************************************ 00:06:46.562 START TEST accel_decomp_full_mcore 00:06:46.562 ************************************ 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.562 22:01:33 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:46.562 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:46.562 [2024-07-15 22:01:33.475246] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:46.562 [2024-07-15 22:01:33.475392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64379 ] 00:06:46.820 [2024-07-15 22:01:33.610190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.820 [2024-07-15 22:01:33.679382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.820 [2024-07-15 22:01:33.679451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.820 [2024-07-15 22:01:33.679524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.820 [2024-07-15 22:01:33.679531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:46.820 22:01:33 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.820 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.821 22:01:33 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:46.821 22:01:33 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.195 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.196 22:01:34 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.196 00:06:48.196 real 0m1.413s 00:06:48.196 user 0m4.502s 00:06:48.196 sys 0m0.096s 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.196 ************************************ 00:06:48.196 END TEST accel_decomp_full_mcore 00:06:48.196 ************************************ 00:06:48.196 22:01:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:48.196 22:01:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.196 22:01:34 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:48.196 22:01:34 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:48.196 22:01:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.196 22:01:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.196 ************************************ 00:06:48.196 START TEST accel_decomp_mthread 00:06:48.196 ************************************ 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:48.196 22:01:34 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:48.196 [2024-07-15 22:01:34.932946] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
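The full_mcore run that finishes above and the mthread run starting here exercise the same accel_perf decompress workload and differ only in how it is parallelized: accel_decomp_full_mcore spreads the work across a four-core mask (-m 0xf, matching the four reactors logged on cores 0-3), while accel_decomp_mthread stays on a single core and instead asks accel_perf for two worker threads (-T 2, as the _mthread name suggests). A minimal sketch of the two traced invocations, with the fd-62 config plumbing that build_accel_config normally provides left out:

# Hedged sketch of the two accel_perf runs traced in this section; the JSON
# accel config normally arrives on file descriptor 62 from the test harness.
accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
bib=/home/vagrant/spdk_repo/spdk/test/accel/bib

# accel_decomp_full_mcore: core mask 0xf (four reactors), full-buffer decompress
"$accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -o 0 -m 0xf

# accel_decomp_mthread: single core, two worker threads
"$accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -T 2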
00:06:48.196 [2024-07-15 22:01:34.933037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64417 ] 00:06:48.196 [2024-07-15 22:01:35.066119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.196 [2024-07-15 22:01:35.129316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:48.455 22:01:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.390 00:06:49.390 real 0m1.382s 00:06:49.390 user 0m1.209s 00:06:49.390 sys 0m0.080s 00:06:49.390 ************************************ 00:06:49.390 END TEST accel_decomp_mthread 00:06:49.390 ************************************ 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.390 22:01:36 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:49.390 22:01:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.390 22:01:36 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.390 22:01:36 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:49.390 22:01:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.390 22:01:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.649 ************************************ 00:06:49.649 START 
TEST accel_decomp_full_mthread 00:06:49.649 ************************************ 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:49.649 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:49.649 [2024-07-15 22:01:36.362970] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:06:49.649 [2024-07-15 22:01:36.363061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64451 ] 00:06:49.649 [2024-07-15 22:01:36.503804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.649 [2024-07-15 22:01:36.575518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:49.908 22:01:36 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:49.908 22:01:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.845 00:06:50.845 real 0m1.427s 00:06:50.845 user 0m1.259s 00:06:50.845 sys 0m0.076s 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.845 ************************************ 00:06:50.845 END TEST accel_decomp_full_mthread 00:06:50.845 ************************************ 00:06:50.845 22:01:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
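All four decompress variants traced above follow the same pattern: build_accel_config assembles the accel JSON configuration (no optional module switches are enabled in this run) and accel_perf reads it from file descriptor 62 via -c /dev/fd/62, then decompresses the bib test file for one second (-t 1) with verification turned on (-y). The "full" variants additionally pass -o 0, and the traced transfer size changes from '4096 bytes' to '111250 bytes', presumably the whole bib payload in one shot. A rough way to reproduce the accel_decomp_full_mthread command outside the harness is sketched below; feeding an empty JSON object on fd 62 is an assumption made here, since the traced run had no config entries to pass:

# Assumption: an empty JSON object stands in for what build_accel_config
# would normally serve on fd 62 in this configuration.
accel_perf=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
"$accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -o 0 -T 2 62< <(echo '{}')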
00:06:51.104 22:01:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.104 22:01:37 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:51.104 22:01:37 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.104 22:01:37 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:51.104 22:01:37 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:51.104 22:01:37 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.104 22:01:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.104 22:01:37 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.104 22:01:37 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.104 22:01:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.104 22:01:37 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.104 22:01:37 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.104 22:01:37 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:51.104 22:01:37 accel -- accel/accel.sh@41 -- # jq -r . 00:06:51.104 ************************************ 00:06:51.104 START TEST accel_dif_functional_tests 00:06:51.104 ************************************ 00:06:51.104 22:01:37 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:51.104 [2024-07-15 22:01:37.881702] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:51.104 [2024-07-15 22:01:37.881821] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64481 ] 00:06:51.104 [2024-07-15 22:01:38.024352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.362 [2024-07-15 22:01:38.103103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.362 [2024-07-15 22:01:38.103249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.362 [2024-07-15 22:01:38.103279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.362 00:06:51.362 00:06:51.362 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.362 http://cunit.sourceforge.net/ 00:06:51.362 00:06:51.362 00:06:51.362 Suite: accel_dif 00:06:51.362 Test: verify: DIF generated, GUARD check ...passed 00:06:51.362 Test: verify: DIF generated, APPTAG check ...passed 00:06:51.362 Test: verify: DIF generated, REFTAG check ...passed 00:06:51.362 Test: verify: DIF not generated, GUARD check ...passed 00:06:51.362 Test: verify: DIF not generated, APPTAG check ...passed 00:06:51.362 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 22:01:38.163519] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.362 [2024-07-15 22:01:38.163614] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.362 passed 00:06:51.362 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:51.362 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 22:01:38.163657] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.362 [2024-07-15 22:01:38.163747] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:51.362 passed 00:06:51.362 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:06:51.362 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:51.362 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:51.362 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:51.362 Test: verify copy: DIF generated, GUARD check ...[2024-07-15 22:01:38.163930] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:51.362 passed 00:06:51.362 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:51.362 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:51.362 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 22:01:38.164145] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:51.362 passed 00:06:51.362 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 22:01:38.164203] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:51.362 passed 00:06:51.362 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:51.362 Test: generate copy: DIF generated, GUARD check ...passed 00:06:51.362 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:51.362 Test: generate copy: DIF generated, REFTAG check ...[2024-07-15 22:01:38.164274] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:51.362 passed 00:06:51.362 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:51.362 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:51.362 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:51.362 Test: generate copy: iovecs-len validate ...passed 00:06:51.362 Test: generate copy: buffer alignment validate ...passed 00:06:51.362 00:06:51.362 [2024-07-15 22:01:38.164641] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:51.362 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.362 suites 1 1 n/a 0 0 00:06:51.362 tests 26 26 26 0 0 00:06:51.362 asserts 115 115 115 0 n/a 00:06:51.362 00:06:51.362 Elapsed time = 0.003 seconds 00:06:51.621 00:06:51.621 real 0m0.526s 00:06:51.621 user 0m0.625s 00:06:51.621 sys 0m0.110s 00:06:51.621 22:01:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.621 22:01:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:51.621 ************************************ 00:06:51.621 END TEST accel_dif_functional_tests 00:06:51.621 ************************************ 00:06:51.621 22:01:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.621 ************************************ 00:06:51.621 END TEST accel 00:06:51.621 ************************************ 00:06:51.621 00:06:51.621 real 0m32.012s 00:06:51.621 user 0m34.465s 00:06:51.621 sys 0m2.914s 00:06:51.621 22:01:38 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.621 22:01:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.621 22:01:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.621 22:01:38 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:51.621 22:01:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.621 22:01:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.621 22:01:38 -- common/autotest_common.sh@10 -- # set +x 00:06:51.621 ************************************ 00:06:51.621 START TEST accel_rpc 00:06:51.621 ************************************ 00:06:51.621 22:01:38 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:51.621 * Looking for test storage... 00:06:51.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:51.621 22:01:38 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:51.621 22:01:38 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64551 00:06:51.621 22:01:38 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64551 00:06:51.621 22:01:38 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64551 ']' 00:06:51.621 22:01:38 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:51.621 22:01:38 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.621 22:01:38 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.621 22:01:38 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.621 22:01:38 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.621 22:01:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.879 [2024-07-15 22:01:38.582324] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
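The accel_dif_functional_tests block above is a CUnit suite run through the prebuilt dif example rather than accel_perf. The dif.c *ERROR* lines (Failed to compare Guard / App Tag / Ref Tag) come from the negative "DIF not generated" and "verify copy" cases and are evidently expected: every test is reported as passed and the Run Summary shows 26/26 tests and 115/115 assertions with no failures. For reference, the traced invocation is just the standalone binary pointed at the harness-supplied config descriptor:

# Traced command for the DIF functional tests; how fd 62 is populated by
# build_accel_config is not reproduced here.
/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62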
00:06:51.879 [2024-07-15 22:01:38.582423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64551 ] 00:06:51.879 [2024-07-15 22:01:38.719501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.879 [2024-07-15 22:01:38.791296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.136 22:01:38 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.136 22:01:38 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:52.136 22:01:38 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:52.136 22:01:38 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:52.136 22:01:38 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:52.136 22:01:38 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:52.136 22:01:38 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:52.136 22:01:38 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.136 22:01:38 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.136 22:01:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.136 ************************************ 00:06:52.136 START TEST accel_assign_opcode 00:06:52.136 ************************************ 00:06:52.136 22:01:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:52.136 22:01:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:52.136 22:01:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.136 22:01:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:52.137 [2024-07-15 22:01:38.851855] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:52.137 22:01:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.137 22:01:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:52.137 22:01:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.137 22:01:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:52.137 [2024-07-15 22:01:38.859837] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:52.137 22:01:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.137 22:01:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:52.137 22:01:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.137 22:01:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:52.137 22:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.137 22:01:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:52.137 22:01:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:52.137 22:01:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:52.137 22:01:39 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.137 22:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:52.137 22:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.137 software 00:06:52.137 00:06:52.137 real 0m0.231s 00:06:52.137 user 0m0.058s 00:06:52.137 sys 0m0.009s 00:06:52.137 ************************************ 00:06:52.137 END TEST accel_assign_opcode 00:06:52.137 ************************************ 00:06:52.137 22:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.137 22:01:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:52.408 22:01:39 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:52.408 22:01:39 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64551 00:06:52.408 22:01:39 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64551 ']' 00:06:52.408 22:01:39 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64551 00:06:52.408 22:01:39 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:52.408 22:01:39 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.408 22:01:39 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64551 00:06:52.408 killing process with pid 64551 00:06:52.408 22:01:39 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.408 22:01:39 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.408 22:01:39 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64551' 00:06:52.408 22:01:39 accel_rpc -- common/autotest_common.sh@967 -- # kill 64551 00:06:52.408 22:01:39 accel_rpc -- common/autotest_common.sh@972 -- # wait 64551 00:06:52.689 ************************************ 00:06:52.689 END TEST accel_rpc 00:06:52.689 ************************************ 00:06:52.689 00:06:52.689 real 0m0.995s 00:06:52.689 user 0m1.004s 00:06:52.689 sys 0m0.316s 00:06:52.689 22:01:39 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.689 22:01:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.689 22:01:39 -- common/autotest_common.sh@1142 -- # return 0 00:06:52.689 22:01:39 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:52.689 22:01:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.689 22:01:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.689 22:01:39 -- common/autotest_common.sh@10 -- # set +x 00:06:52.689 ************************************ 00:06:52.689 START TEST app_cmdline 00:06:52.689 ************************************ 00:06:52.689 22:01:39 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:52.689 * Looking for test storage... 00:06:52.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:52.689 22:01:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:52.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
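The accel_rpc block above checks opcode assignment against a target started with --wait-for-rpc, i.e. before framework initialization: accel_assign_opc is issued first for module "incorrect" and then for "software", framework_start_init completes startup, and accel_get_opc_assignments confirms that the copy opcode ended up on the software module. A rough equivalent using scripts/rpc.py directly (the harness goes through its rpc_cmd wrapper and the default /var/tmp/spdk.sock):

# Sketch of the assign-opcode flow traced above. Assumes a target already
# running as: /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" accel_assign_opc -o copy -m incorrect    # first assignment, as the test does
"$rpc" accel_assign_opc -o copy -m software     # reassigned; the check below expects this one
"$rpc" framework_start_init                     # finish subsystem initialization
"$rpc" accel_get_opc_assignments | jq -r .copy  # prints: software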
00:06:52.689 22:01:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64644 00:06:52.689 22:01:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64644 00:06:52.689 22:01:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:52.689 22:01:39 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64644 ']' 00:06:52.689 22:01:39 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.689 22:01:39 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.689 22:01:39 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.689 22:01:39 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.689 22:01:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:52.689 [2024-07-15 22:01:39.616696] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:06:52.689 [2024-07-15 22:01:39.617036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64644 ] 00:06:52.946 [2024-07-15 22:01:39.748253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.946 [2024-07-15 22:01:39.809201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.880 22:01:40 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.880 22:01:40 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:53.880 22:01:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:54.138 { 00:06:54.138 "fields": { 00:06:54.138 "commit": "406b3b1b5", 00:06:54.138 "major": 24, 00:06:54.138 "minor": 9, 00:06:54.138 "patch": 0, 00:06:54.139 "suffix": "-pre" 00:06:54.139 }, 00:06:54.139 "version": "SPDK v24.09-pre git sha1 406b3b1b5" 00:06:54.139 } 00:06:54.139 22:01:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:54.139 22:01:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:54.139 22:01:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:54.139 22:01:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:54.139 22:01:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:54.139 22:01:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:54.139 22:01:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.139 22:01:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:54.139 22:01:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:54.139 22:01:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:54.139 22:01:40 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.397 2024/07/15 22:01:41 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:54.397 request: 00:06:54.397 { 00:06:54.397 "method": "env_dpdk_get_mem_stats", 00:06:54.397 "params": {} 00:06:54.397 } 00:06:54.397 Got JSON-RPC error response 00:06:54.397 GoRPCClient: error on JSON-RPC call 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.397 22:01:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64644 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64644 ']' 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64644 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64644 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64644' 00:06:54.397 killing process with pid 64644 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@967 -- # kill 64644 00:06:54.397 22:01:41 app_cmdline -- common/autotest_common.sh@972 -- # wait 64644 00:06:54.656 00:06:54.656 real 0m2.108s 00:06:54.656 user 0m2.813s 00:06:54.656 sys 0m0.387s 00:06:54.656 22:01:41 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.656 22:01:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.656 ************************************ 00:06:54.656 END TEST app_cmdline 00:06:54.656 ************************************ 00:06:54.915 22:01:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:54.915 22:01:41 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:54.915 22:01:41 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:06:54.915 22:01:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.915 22:01:41 -- common/autotest_common.sh@10 -- # set +x 00:06:54.915 ************************************ 00:06:54.915 START TEST version 00:06:54.915 ************************************ 00:06:54.915 22:01:41 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:54.915 * Looking for test storage... 00:06:54.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:54.915 22:01:41 version -- app/version.sh@17 -- # get_header_version major 00:06:54.915 22:01:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:54.915 22:01:41 version -- app/version.sh@14 -- # cut -f2 00:06:54.915 22:01:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:54.915 22:01:41 version -- app/version.sh@17 -- # major=24 00:06:54.915 22:01:41 version -- app/version.sh@18 -- # get_header_version minor 00:06:54.915 22:01:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:54.915 22:01:41 version -- app/version.sh@14 -- # cut -f2 00:06:54.915 22:01:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:54.915 22:01:41 version -- app/version.sh@18 -- # minor=9 00:06:54.915 22:01:41 version -- app/version.sh@19 -- # get_header_version patch 00:06:54.915 22:01:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:54.915 22:01:41 version -- app/version.sh@14 -- # cut -f2 00:06:54.915 22:01:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:54.915 22:01:41 version -- app/version.sh@19 -- # patch=0 00:06:54.915 22:01:41 version -- app/version.sh@20 -- # get_header_version suffix 00:06:54.915 22:01:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:54.915 22:01:41 version -- app/version.sh@14 -- # cut -f2 00:06:54.915 22:01:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:54.915 22:01:41 version -- app/version.sh@20 -- # suffix=-pre 00:06:54.915 22:01:41 version -- app/version.sh@22 -- # version=24.9 00:06:54.915 22:01:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:54.915 22:01:41 version -- app/version.sh@28 -- # version=24.9rc0 00:06:54.915 22:01:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:54.915 22:01:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:54.915 22:01:41 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:54.915 22:01:41 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:54.915 00:06:54.915 real 0m0.137s 00:06:54.915 user 0m0.093s 00:06:54.915 sys 0m0.072s 00:06:54.915 22:01:41 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.915 22:01:41 version -- common/autotest_common.sh@10 -- # set +x 00:06:54.915 ************************************ 00:06:54.915 END TEST version 00:06:54.915 ************************************ 00:06:54.915 22:01:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:54.915 22:01:41 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:54.916 
22:01:41 -- spdk/autotest.sh@198 -- # uname -s 00:06:54.916 22:01:41 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:54.916 22:01:41 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:54.916 22:01:41 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:54.916 22:01:41 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:54.916 22:01:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:54.916 22:01:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:54.916 22:01:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:54.916 22:01:41 -- common/autotest_common.sh@10 -- # set +x 00:06:54.916 22:01:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:54.916 22:01:41 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:54.916 22:01:41 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:54.916 22:01:41 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:54.916 22:01:41 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:54.916 22:01:41 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:54.916 22:01:41 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:54.916 22:01:41 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:54.916 22:01:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.916 22:01:41 -- common/autotest_common.sh@10 -- # set +x 00:06:54.916 ************************************ 00:06:54.916 START TEST nvmf_tcp 00:06:54.916 ************************************ 00:06:54.916 22:01:41 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:55.175 * Looking for test storage... 00:06:55.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:55.175 22:01:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:55.175 22:01:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.176 22:01:41 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.176 22:01:41 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.176 22:01:41 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.176 22:01:41 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.176 22:01:41 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.176 22:01:41 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.176 22:01:41 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:55.176 22:01:41 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:55.176 22:01:41 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:55.176 22:01:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:55.176 22:01:41 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:55.176 22:01:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:55.176 22:01:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.176 22:01:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.176 ************************************ 00:06:55.176 START TEST nvmf_example 00:06:55.176 ************************************ 00:06:55.176 22:01:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:55.176 * Looking for test storage... 
00:06:55.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
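common.sh above derives the initiator identity from nvme gen-hostnqn and reuses the UUID portion as the host ID. A rough sketch of using that identity for a manual connect, assuming nvme-cli is installed and the target created later in this run (cnode1 listening on 10.0.0.2:4420) is up; the ${...##*:} split is only an illustration, not necessarily how common.sh extracts the ID:
NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the UUID for --hostid
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
nvme disconnect -n nqn.2016-06.io.spdk:cnode1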
00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:55.176 22:01:42 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:55.176 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:55.177 Cannot find device "nvmf_init_br" 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:55.177 Cannot find device "nvmf_tgt_br" 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:55.177 Cannot find device "nvmf_tgt_br2" 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:55.177 Cannot find device "nvmf_init_br" 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:06:55.177 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:55.436 Cannot find device "nvmf_tgt_br" 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:55.436 Cannot find device 
"nvmf_tgt_br2" 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:55.436 Cannot find device "nvmf_br" 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:55.436 Cannot find device "nvmf_init_if" 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:55.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:55.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:55.436 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:55.694 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:55.694 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:55.694 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:55.694 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:06:55.694 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:55.694 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:55.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:55.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:06:55.695 00:06:55.695 --- 10.0.0.2 ping statistics --- 00:06:55.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.695 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:55.695 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:55.695 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:06:55.695 00:06:55.695 --- 10.0.0.3 ping statistics --- 00:06:55.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.695 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:55.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:55.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:06:55.695 00:06:55.695 --- 10.0.0.1 ping statistics --- 00:06:55.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.695 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=64997 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
64997 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 64997 ']' 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.695 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.954 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.954 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:55.954 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:55.954 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.954 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.954 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.954 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.954 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.954 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.954 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:55.954 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.954 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:56.212 22:01:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:08.432 Initializing NVMe Controllers 00:07:08.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:08.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:08.432 Initialization complete. Launching workers. 00:07:08.432 ======================================================== 00:07:08.432 Latency(us) 00:07:08.432 Device Information : IOPS MiB/s Average min max 00:07:08.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14017.13 54.75 4565.33 914.34 21135.45 00:07:08.432 ======================================================== 00:07:08.432 Total : 14017.13 54.75 4565.33 914.34 21135.45 00:07:08.432 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:08.432 rmmod nvme_tcp 00:07:08.432 rmmod nvme_fabrics 00:07:08.432 rmmod nvme_keyring 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 64997 ']' 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 64997 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 64997 ']' 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 64997 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64997 00:07:08.432 killing process with pid 64997 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64997' 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 64997 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 64997 00:07:08.432 nvmf threads initialize successfully 00:07:08.432 bdev subsystem init successfully 
00:07:08.432 created a nvmf target service 00:07:08.432 create targets's poll groups done 00:07:08.432 all subsystems of target started 00:07:08.432 nvmf target is running 00:07:08.432 all subsystems of target stopped 00:07:08.432 destroy targets's poll groups done 00:07:08.432 destroyed the nvmf target service 00:07:08.432 bdev subsystem finish successfully 00:07:08.432 nvmf threads destroy successfully 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:08.432 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.433 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.433 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.433 22:01:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:08.433 22:01:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:08.433 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:08.433 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.433 00:07:08.433 real 0m11.567s 00:07:08.433 user 0m40.926s 00:07:08.433 sys 0m1.909s 00:07:08.433 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.433 22:01:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.433 ************************************ 00:07:08.433 END TEST nvmf_example 00:07:08.433 ************************************ 00:07:08.433 22:01:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:08.433 22:01:53 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:08.433 22:01:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:08.433 22:01:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.433 22:01:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.433 ************************************ 00:07:08.433 START TEST nvmf_filesystem 00:07:08.433 ************************************ 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:08.433 * Looking for test storage... 
00:07:08.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:08.433 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:08.434 #define SPDK_CONFIG_H 00:07:08.434 #define SPDK_CONFIG_APPS 1 00:07:08.434 #define SPDK_CONFIG_ARCH native 00:07:08.434 #undef SPDK_CONFIG_ASAN 00:07:08.434 #define SPDK_CONFIG_AVAHI 1 00:07:08.434 #undef SPDK_CONFIG_CET 00:07:08.434 #define SPDK_CONFIG_COVERAGE 1 00:07:08.434 #define SPDK_CONFIG_CROSS_PREFIX 00:07:08.434 #undef SPDK_CONFIG_CRYPTO 00:07:08.434 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:08.434 #undef SPDK_CONFIG_CUSTOMOCF 00:07:08.434 #undef SPDK_CONFIG_DAOS 00:07:08.434 #define SPDK_CONFIG_DAOS_DIR 00:07:08.434 #define SPDK_CONFIG_DEBUG 1 00:07:08.434 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:08.434 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:08.434 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:08.434 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:08.434 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:08.434 #undef SPDK_CONFIG_DPDK_UADK 00:07:08.434 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:08.434 #define SPDK_CONFIG_EXAMPLES 1 00:07:08.434 #undef SPDK_CONFIG_FC 00:07:08.434 #define SPDK_CONFIG_FC_PATH 00:07:08.434 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:08.434 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:08.434 #undef SPDK_CONFIG_FUSE 00:07:08.434 #undef SPDK_CONFIG_FUZZER 00:07:08.434 #define SPDK_CONFIG_FUZZER_LIB 00:07:08.434 #define SPDK_CONFIG_GOLANG 1 00:07:08.434 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:08.434 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:08.434 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:08.434 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:08.434 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:08.434 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:08.434 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:08.434 #define SPDK_CONFIG_IDXD 1 00:07:08.434 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:08.434 #undef SPDK_CONFIG_IPSEC_MB 00:07:08.434 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:08.434 #define SPDK_CONFIG_ISAL 1 00:07:08.434 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:08.434 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:08.434 #define SPDK_CONFIG_LIBDIR 00:07:08.434 #undef SPDK_CONFIG_LTO 00:07:08.434 #define SPDK_CONFIG_MAX_LCORES 128 00:07:08.434 #define SPDK_CONFIG_NVME_CUSE 1 00:07:08.434 #undef SPDK_CONFIG_OCF 00:07:08.434 #define SPDK_CONFIG_OCF_PATH 00:07:08.434 #define SPDK_CONFIG_OPENSSL_PATH 00:07:08.434 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:08.434 #define SPDK_CONFIG_PGO_DIR 00:07:08.434 #undef SPDK_CONFIG_PGO_USE 00:07:08.434 #define SPDK_CONFIG_PREFIX /usr/local 00:07:08.434 #undef SPDK_CONFIG_RAID5F 00:07:08.434 #undef SPDK_CONFIG_RBD 00:07:08.434 #define SPDK_CONFIG_RDMA 1 00:07:08.434 #define SPDK_CONFIG_RDMA_PROV verbs 
00:07:08.434 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:08.434 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:08.434 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:08.434 #define SPDK_CONFIG_SHARED 1 00:07:08.434 #undef SPDK_CONFIG_SMA 00:07:08.434 #define SPDK_CONFIG_TESTS 1 00:07:08.434 #undef SPDK_CONFIG_TSAN 00:07:08.434 #define SPDK_CONFIG_UBLK 1 00:07:08.434 #define SPDK_CONFIG_UBSAN 1 00:07:08.434 #undef SPDK_CONFIG_UNIT_TESTS 00:07:08.434 #undef SPDK_CONFIG_URING 00:07:08.434 #define SPDK_CONFIG_URING_PATH 00:07:08.434 #undef SPDK_CONFIG_URING_ZNS 00:07:08.434 #define SPDK_CONFIG_USDT 1 00:07:08.434 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:08.434 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:08.434 #undef SPDK_CONFIG_VFIO_USER 00:07:08.434 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:08.434 #define SPDK_CONFIG_VHOST 1 00:07:08.434 #define SPDK_CONFIG_VIRTIO 1 00:07:08.434 #undef SPDK_CONFIG_VTUNE 00:07:08.434 #define SPDK_CONFIG_VTUNE_DIR 00:07:08.434 #define SPDK_CONFIG_WERROR 1 00:07:08.434 #define SPDK_CONFIG_WPDK_DIR 00:07:08.434 #undef SPDK_CONFIG_XNVME 00:07:08.434 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:08.434 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:08.435 22:01:53 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:08.435 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65224 ]] 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65224 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.CRm1VS 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.CRm1VS/tests/target /tmp/spdk.CRm1VS 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264516608 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:07:08.436 
22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13785927680 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244383232 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13785927680 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244383232 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267756544 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=135168 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=94879305728 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4823474176 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:08.436 * Looking for test storage... 00:07:08.436 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13785927680 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:08.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@1682 -- # set -o errtrace 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:08.437 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:08.438 Cannot find device "nvmf_tgt_br" 00:07:08.438 22:01:53 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:08.438 Cannot find device "nvmf_tgt_br2" 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:08.438 Cannot find device "nvmf_tgt_br" 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:08.438 Cannot find device "nvmf_tgt_br2" 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:08.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:08.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:08.438 22:01:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:08.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:07:08.438 00:07:08.438 --- 10.0.0.2 ping statistics --- 00:07:08.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.438 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:08.438 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:08.438 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:07:08.438 00:07:08.438 --- 10.0.0.3 ping statistics --- 00:07:08.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.438 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:08.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:08.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:07:08.438 00:07:08.438 --- 10.0.0.1 ping statistics --- 00:07:08.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.438 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.438 ************************************ 00:07:08.438 START TEST nvmf_filesystem_no_in_capsule 00:07:08.438 ************************************ 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65383 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65383 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65383 ']' 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.438 22:01:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.438 [2024-07-15 22:01:54.181480] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:07:08.438 [2024-07-15 22:01:54.181587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.438 [2024-07-15 22:01:54.328312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.438 [2024-07-15 22:01:54.395926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.438 [2024-07-15 22:01:54.395993] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.438 [2024-07-15 22:01:54.396004] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.438 [2024-07-15 22:01:54.396012] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.438 [2024-07-15 22:01:54.396020] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:08.438 [2024-07-15 22:01:54.396125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.438 [2024-07-15 22:01:54.396175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.438 [2024-07-15 22:01:54.396656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.438 [2024-07-15 22:01:54.396692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.438 [2024-07-15 22:01:55.190624] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.438 
22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.438 Malloc1 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:08.438 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.439 [2024-07-15 22:01:55.333166] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
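Editor's note: the RPC sequence traced above provisions what the filesystem test will format: a 512 MiB malloc bdev, a subsystem with a fixed serial, the namespace, and a TCP listener on 10.0.0.2:4420. Condensed into plain rpc.py calls, under the same hedged wrapper assumption as above.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path, as above
  # 512 MiB backing device: 1048576 blocks of 512 bytes, named Malloc1
  "$RPC" -s /var/tmp/spdk.sock bdev_malloc_create 512 512 -b Malloc1
  # Subsystem allowing any host (-a), with the serial the host greps for later
  "$RPC" -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME
  # Attach the bdev as a namespace and expose it over NVMe/TCP
  "$RPC" -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$RPC" -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420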
# set +x 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:08.439 { 00:07:08.439 "aliases": [ 00:07:08.439 "46246f2c-a1be-4689-986e-44cb8ccbb197" 00:07:08.439 ], 00:07:08.439 "assigned_rate_limits": { 00:07:08.439 "r_mbytes_per_sec": 0, 00:07:08.439 "rw_ios_per_sec": 0, 00:07:08.439 "rw_mbytes_per_sec": 0, 00:07:08.439 "w_mbytes_per_sec": 0 00:07:08.439 }, 00:07:08.439 "block_size": 512, 00:07:08.439 "claim_type": "exclusive_write", 00:07:08.439 "claimed": true, 00:07:08.439 "driver_specific": {}, 00:07:08.439 "memory_domains": [ 00:07:08.439 { 00:07:08.439 "dma_device_id": "system", 00:07:08.439 "dma_device_type": 1 00:07:08.439 }, 00:07:08.439 { 00:07:08.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.439 "dma_device_type": 2 00:07:08.439 } 00:07:08.439 ], 00:07:08.439 "name": "Malloc1", 00:07:08.439 "num_blocks": 1048576, 00:07:08.439 "product_name": "Malloc disk", 00:07:08.439 "supported_io_types": { 00:07:08.439 "abort": true, 00:07:08.439 "compare": false, 00:07:08.439 "compare_and_write": false, 00:07:08.439 "copy": true, 00:07:08.439 "flush": true, 00:07:08.439 "get_zone_info": false, 00:07:08.439 "nvme_admin": false, 00:07:08.439 "nvme_io": false, 00:07:08.439 "nvme_io_md": false, 00:07:08.439 "nvme_iov_md": false, 00:07:08.439 "read": true, 00:07:08.439 "reset": true, 00:07:08.439 "seek_data": false, 00:07:08.439 "seek_hole": false, 00:07:08.439 "unmap": true, 00:07:08.439 "write": true, 00:07:08.439 "write_zeroes": true, 00:07:08.439 "zcopy": true, 00:07:08.439 "zone_append": false, 00:07:08.439 "zone_management": false 00:07:08.439 }, 00:07:08.439 "uuid": "46246f2c-a1be-4689-986e-44cb8ccbb197", 00:07:08.439 "zoned": false 00:07:08.439 } 00:07:08.439 ]' 00:07:08.439 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:08.698 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:08.698 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:08.698 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:08.698 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:08.698 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:08.698 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:08.698 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.698 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.698 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:08.698 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
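Editor's note: at this point the test switches to the host side. It derives the expected device size from bdev_get_bdevs (block_size x num_blocks = 512 x 1048576 = 536870912 bytes; the harness goes through a MiB intermediate but the product is the same), connects with nvme-cli, waits for the namespace to appear by serial, and carves one GPT partition, as the trace continues below. A condensed sketch; the retry loop is an assumed stand-in for the waitforserial helper.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path, as above
  bs=$("$RPC" -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')
  nb=$("$RPC" -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')
  malloc_size=$((bs * nb))    # 512 * 1048576 = 536870912 bytes
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
      --hostid=ff65e169-209e-4b79-b82d-da213c413a29 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
    sleep 2    # assumed loop mirroring the waitforserial retries in the trace
  done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  mkdir -p /mnt/device
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe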
nvme_devices=0 00:07:08.698 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:08.698 22:01:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:11.228 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:11.229 22:01:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.163 ************************************ 
00:07:12.163 START TEST filesystem_ext4 00:07:12.163 ************************************ 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:12.163 mke2fs 1.46.5 (30-Dec-2021) 00:07:12.163 Discarding device blocks: 0/522240 done 00:07:12.163 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:12.163 Filesystem UUID: 9b634635-9c6f-4f73-ae47-f5fa54cd7181 00:07:12.163 Superblock backups stored on blocks: 00:07:12.163 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:12.163 00:07:12.163 Allocating group tables: 0/64 done 00:07:12.163 Writing inode tables: 0/64 done 00:07:12.163 Creating journal (8192 blocks): done 00:07:12.163 Writing superblocks and filesystem accounting information: 0/64 done 00:07:12.163 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:12.163 22:01:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:12.163 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:12.163 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:12.163 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:12.163 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:12.163 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:12.163 22:01:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 65383 00:07:12.163 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:12.163 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:12.163 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:12.163 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:12.163 00:07:12.163 real 0m0.306s 00:07:12.163 user 0m0.017s 00:07:12.163 sys 0m0.053s 00:07:12.163 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.163 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:12.163 ************************************ 00:07:12.163 END TEST filesystem_ext4 00:07:12.163 ************************************ 00:07:12.421 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:12.421 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:12.421 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:12.421 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.422 ************************************ 00:07:12.422 START TEST filesystem_btrfs 00:07:12.422 ************************************ 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:12.422 
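Editor's note: the ext4 pass above, and the btrfs and xfs passes that follow, all run the same smoke test: format the partition (ext4 gets -F, the other filesystems get -f, per the force-flag selection in the trace), mount it, create and delete a file with syncs in between, unmount, then confirm the target is still alive and the block devices are still visible. A hedged reconstruction of one iteration:

  fstype=ext4              # the harness repeats this with btrfs and xfs
  nvmfpid=65383            # pid of the nvmf_tgt started earlier in this run
  case "$fstype" in
    ext4) force=-F ;;      # mkfs.ext4 takes -F to force
    *)    force=-f ;;      # mkfs.btrfs and mkfs.xfs take -f
  esac
  "mkfs.$fstype" "$force" /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                          # target process must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still visible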
22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:12.422 btrfs-progs v6.6.2 00:07:12.422 See https://btrfs.readthedocs.io for more information. 00:07:12.422 00:07:12.422 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:12.422 NOTE: several default settings have changed in version 5.15, please make sure 00:07:12.422 this does not affect your deployments: 00:07:12.422 - DUP for metadata (-m dup) 00:07:12.422 - enabled no-holes (-O no-holes) 00:07:12.422 - enabled free-space-tree (-R free-space-tree) 00:07:12.422 00:07:12.422 Label: (null) 00:07:12.422 UUID: e10c4c44-92ce-470f-9ec7-e4080034de11 00:07:12.422 Node size: 16384 00:07:12.422 Sector size: 4096 00:07:12.422 Filesystem size: 510.00MiB 00:07:12.422 Block group profiles: 00:07:12.422 Data: single 8.00MiB 00:07:12.422 Metadata: DUP 32.00MiB 00:07:12.422 System: DUP 8.00MiB 00:07:12.422 SSD detected: yes 00:07:12.422 Zoned device: no 00:07:12.422 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:12.422 Runtime features: free-space-tree 00:07:12.422 Checksum: crc32c 00:07:12.422 Number of devices: 1 00:07:12.422 Devices: 00:07:12.422 ID SIZE PATH 00:07:12.422 1 510.00MiB /dev/nvme0n1p1 00:07:12.422 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65383 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:12.422 ************************************ 00:07:12.422 END TEST filesystem_btrfs 00:07:12.422 ************************************ 00:07:12.422 00:07:12.422 real 0m0.203s 00:07:12.422 user 0m0.020s 00:07:12.422 sys 0m0.064s 00:07:12.422 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.422 
22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.680 ************************************ 00:07:12.680 START TEST filesystem_xfs 00:07:12.680 ************************************ 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:12.680 22:01:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:12.680 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:12.680 = sectsz=512 attr=2, projid32bit=1 00:07:12.680 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:12.680 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:12.680 data = bsize=4096 blocks=130560, imaxpct=25 00:07:12.680 = sunit=0 swidth=0 blks 00:07:12.680 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:12.680 log =internal log bsize=4096 blocks=16384, version=2 00:07:12.680 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:12.680 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:13.244 Discarding blocks...Done. 
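Editor's note: a quick sanity check on the mkfs geometry printed above. mkfs.xfs reports 130560 data blocks of 4096 bytes, which is exactly the 510 MiB that mkfs.btrfs reports and that ext4's 522240 1 KiB blocks add up to; the remaining 2 MiB of the 512 MiB namespace is presumably GPT metadata and alignment from the parted step.

  echo $((130560 * 4096))        # 534773760 bytes (xfs: 130560 x 4 KiB data blocks)
  echo $((510 * 1024 * 1024))    # 534773760 bytes = 510 MiB (matches btrfs and 522240 x 1 KiB ext4 blocks)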
00:07:13.244 22:02:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:13.244 22:02:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65383 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:15.771 ************************************ 00:07:15.771 END TEST filesystem_xfs 00:07:15.771 ************************************ 00:07:15.771 00:07:15.771 real 0m3.081s 00:07:15.771 user 0m0.018s 00:07:15.771 sys 0m0.050s 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:15.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:15.771 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.772 22:02:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65383 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65383 ']' 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65383 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65383 00:07:15.772 killing process with pid 65383 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65383' 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65383 00:07:15.772 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65383 00:07:16.030 ************************************ 00:07:16.030 END TEST nvmf_filesystem_no_in_capsule 00:07:16.030 ************************************ 00:07:16.030 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:16.030 00:07:16.030 real 0m8.873s 00:07:16.030 user 0m33.063s 00:07:16.030 sys 0m1.709s 00:07:16.030 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.030 22:02:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
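Editor's note: the teardown traced above mirrors the setup in reverse: remove the test partition while holding a lock on the block device, disconnect the NVMe/TCP controller, wait for the serial to disappear from lsblk, delete the subsystem over RPC, and finally stop the target and wait for it to exit. A condensed, hedged sketch:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1    # assumed loop standing in for the waitforserial_disconnect helper
  done
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path, as above
  "$RPC" -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 65383    # nvmfpid recorded when the target was started
  wait 65383    # only valid from the shell that launched nvmf_tgt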
00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.288 ************************************ 00:07:16.288 START TEST nvmf_filesystem_in_capsule 00:07:16.288 ************************************ 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65695 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65695 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65695 ']' 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.288 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.288 [2024-07-15 22:02:03.080048] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:07:16.288 [2024-07-15 22:02:03.080152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.288 [2024-07-15 22:02:03.213689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.546 [2024-07-15 22:02:03.284833] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.546 [2024-07-15 22:02:03.284888] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.546 [2024-07-15 22:02:03.284900] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.546 [2024-07-15 22:02:03.284911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
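Editor's note: the second half of the suite repeats the same filesystem matrix; the only functional difference is the in-capsule data size. nvmf_filesystem_part is now invoked with 4096 instead of 0, so the transport created just below this point is built with -c 4096, allowing small writes to travel inside the command capsule. The one line that changes, under the same rpc.py wrapper assumption as earlier:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path, as above
  "$RPC" -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192 -c 4096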
00:07:16.546 [2024-07-15 22:02:03.284921] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.546 [2024-07-15 22:02:03.285035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.546 [2024-07-15 22:02:03.285120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.546 [2024-07-15 22:02:03.285164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.546 [2024-07-15 22:02:03.285170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.546 [2024-07-15 22:02:03.416875] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.546 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.804 Malloc1 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.804 22:02:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.804 [2024-07-15 22:02:03.549426] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:16.804 { 00:07:16.804 "aliases": [ 00:07:16.804 "3b0208c9-a728-451c-9b67-a4efb14146ae" 00:07:16.804 ], 00:07:16.804 "assigned_rate_limits": { 00:07:16.804 "r_mbytes_per_sec": 0, 00:07:16.804 "rw_ios_per_sec": 0, 00:07:16.804 "rw_mbytes_per_sec": 0, 00:07:16.804 "w_mbytes_per_sec": 0 00:07:16.804 }, 00:07:16.804 "block_size": 512, 00:07:16.804 "claim_type": "exclusive_write", 00:07:16.804 "claimed": true, 00:07:16.804 "driver_specific": {}, 00:07:16.804 "memory_domains": [ 00:07:16.804 { 00:07:16.804 "dma_device_id": "system", 00:07:16.804 "dma_device_type": 1 00:07:16.804 }, 00:07:16.804 { 00:07:16.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.804 "dma_device_type": 2 00:07:16.804 } 00:07:16.804 ], 00:07:16.804 "name": "Malloc1", 00:07:16.804 "num_blocks": 1048576, 00:07:16.804 "product_name": "Malloc disk", 00:07:16.804 "supported_io_types": { 00:07:16.804 "abort": true, 00:07:16.804 "compare": false, 00:07:16.804 "compare_and_write": false, 00:07:16.804 "copy": true, 00:07:16.804 "flush": true, 00:07:16.804 "get_zone_info": false, 00:07:16.804 "nvme_admin": false, 00:07:16.804 "nvme_io": false, 00:07:16.804 "nvme_io_md": false, 00:07:16.804 "nvme_iov_md": false, 00:07:16.804 "read": true, 00:07:16.804 "reset": true, 00:07:16.804 "seek_data": false, 00:07:16.804 "seek_hole": false, 00:07:16.804 "unmap": true, 
00:07:16.804 "write": true, 00:07:16.804 "write_zeroes": true, 00:07:16.804 "zcopy": true, 00:07:16.804 "zone_append": false, 00:07:16.804 "zone_management": false 00:07:16.804 }, 00:07:16.804 "uuid": "3b0208c9-a728-451c-9b67-a4efb14146ae", 00:07:16.804 "zoned": false 00:07:16.804 } 00:07:16.804 ]' 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:16.804 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.061 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:17.061 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:17.061 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:17.061 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:17.061 22:02:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:18.959 22:02:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:18.959 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:19.216 22:02:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.144 ************************************ 00:07:20.144 START TEST filesystem_in_capsule_ext4 00:07:20.144 ************************************ 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:20.144 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:20.144 22:02:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:20.144 mke2fs 1.46.5 (30-Dec-2021) 00:07:20.144 Discarding device blocks: 0/522240 done 00:07:20.144 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:20.144 Filesystem UUID: 417af9cb-00af-46fe-9311-fb9d73fafa0a 00:07:20.144 Superblock backups stored on blocks: 00:07:20.144 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:20.144 00:07:20.144 Allocating group tables: 0/64 done 00:07:20.144 Writing inode tables: 0/64 done 00:07:20.144 Creating journal (8192 blocks): done 00:07:20.401 Writing superblocks and filesystem accounting information: 0/64 done 00:07:20.401 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65695 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.401 00:07:20.401 real 0m0.311s 00:07:20.401 user 0m0.023s 00:07:20.401 sys 0m0.044s 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.401 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:20.401 ************************************ 00:07:20.401 END TEST filesystem_in_capsule_ext4 00:07:20.401 ************************************ 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:20.660 22:02:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.660 ************************************ 00:07:20.660 START TEST filesystem_in_capsule_btrfs 00:07:20.660 ************************************ 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:20.660 btrfs-progs v6.6.2 00:07:20.660 See https://btrfs.readthedocs.io for more information. 00:07:20.660 00:07:20.660 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:20.660 NOTE: several default settings have changed in version 5.15, please make sure 00:07:20.660 this does not affect your deployments: 00:07:20.660 - DUP for metadata (-m dup) 00:07:20.660 - enabled no-holes (-O no-holes) 00:07:20.660 - enabled free-space-tree (-R free-space-tree) 00:07:20.660 00:07:20.660 Label: (null) 00:07:20.660 UUID: a91a10ec-90f8-493f-869a-bccd4b353c8b 00:07:20.660 Node size: 16384 00:07:20.660 Sector size: 4096 00:07:20.660 Filesystem size: 510.00MiB 00:07:20.660 Block group profiles: 00:07:20.660 Data: single 8.00MiB 00:07:20.660 Metadata: DUP 32.00MiB 00:07:20.660 System: DUP 8.00MiB 00:07:20.660 SSD detected: yes 00:07:20.660 Zoned device: no 00:07:20.660 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:20.660 Runtime features: free-space-tree 00:07:20.660 Checksum: crc32c 00:07:20.660 Number of devices: 1 00:07:20.660 Devices: 00:07:20.660 ID SIZE PATH 00:07:20.660 1 510.00MiB /dev/nvme0n1p1 00:07:20.660 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:20.660 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:20.916 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65695 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.917 ************************************ 00:07:20.917 END TEST filesystem_in_capsule_btrfs 00:07:20.917 ************************************ 00:07:20.917 00:07:20.917 real 0m0.271s 00:07:20.917 user 0m0.025s 00:07:20.917 sys 0m0.058s 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.917 ************************************ 00:07:20.917 START TEST filesystem_in_capsule_xfs 00:07:20.917 ************************************ 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:20.917 22:02:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:20.917 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:20.917 = sectsz=512 attr=2, projid32bit=1 00:07:20.917 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:20.917 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:20.917 data = bsize=4096 blocks=130560, imaxpct=25 00:07:20.917 = sunit=0 swidth=0 blks 00:07:20.917 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:20.917 log =internal log bsize=4096 blocks=16384, version=2 00:07:20.917 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:20.917 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:21.480 Discarding blocks...Done. 
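For reference, the filesystem_in_capsule check exercised here (mkfs.xfs above, then the mount/touch/sync/rm/umount entries that follow) boils down to the shell sequence below. This is a condensed sketch rather than the verbatim target/filesystem.sh: the device and mount-point names come from the trace itself, while the pid/lsblk cleanup checks and the retry counter are omitted.

    dev=/dev/nvme0n1p1   # namespace partition exposed by the SPDK target over NVMe/TCP
    mnt=/mnt/device
    mkfs.xfs -f "$dev"   # make_filesystem xfs; per the traced helper, only ext4 switches to -F
    mount "$dev" "$mnt"
    touch "$mnt/aaa"     # prove the filesystem is writable across the fabric
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"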
00:07:21.480 22:02:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:21.480 22:02:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65695 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:23.374 ************************************ 00:07:23.374 END TEST filesystem_in_capsule_xfs 00:07:23.374 ************************************ 00:07:23.374 00:07:23.374 real 0m2.579s 00:07:23.374 user 0m0.019s 00:07:23.374 sys 0m0.049s 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:23.374 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:23.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:23.631 22:02:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65695 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65695 ']' 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65695 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65695 00:07:23.631 killing process with pid 65695 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65695' 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65695 00:07:23.631 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65695 00:07:24.197 ************************************ 00:07:24.197 END TEST nvmf_filesystem_in_capsule 00:07:24.197 ************************************ 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:24.197 00:07:24.197 real 0m7.864s 00:07:24.197 user 0m29.191s 00:07:24.197 sys 0m1.497s 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:24.197 rmmod nvme_tcp 00:07:24.197 rmmod nvme_fabrics 00:07:24.197 rmmod nvme_keyring 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.197 22:02:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.197 22:02:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:24.197 00:07:24.197 real 0m17.448s 00:07:24.197 user 1m2.468s 00:07:24.197 sys 0m3.550s 00:07:24.197 22:02:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.197 ************************************ 00:07:24.197 END TEST nvmf_filesystem 00:07:24.197 ************************************ 00:07:24.197 22:02:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.197 22:02:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:24.197 22:02:11 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:24.197 22:02:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:24.197 22:02:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.197 22:02:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.197 ************************************ 00:07:24.197 START TEST nvmf_target_discovery 00:07:24.197 ************************************ 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:24.197 * Looking for test storage... 
00:07:24.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.197 22:02:11 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.198 22:02:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.198 22:02:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.198 22:02:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.198 22:02:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:24.198 22:02:11 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.198 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:24.198 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.198 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:24.456 Cannot find device "nvmf_tgt_br" 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:24.456 Cannot find device "nvmf_tgt_br2" 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:24.456 Cannot find device "nvmf_tgt_br" 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:24.456 Cannot find device "nvmf_tgt_br2" 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:24.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:24.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:24.456 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:24.715 22:02:11 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:24.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:24.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:07:24.715 00:07:24.715 --- 10.0.0.2 ping statistics --- 00:07:24.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.715 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:24.715 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:24.715 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:07:24.715 00:07:24.715 --- 10.0.0.3 ping statistics --- 00:07:24.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.715 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:24.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:24.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:07:24.715 00:07:24.715 --- 10.0.0.1 ping statistics --- 00:07:24.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.715 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=66137 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 66137 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 66137 ']' 00:07:24.715 22:02:11 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:24.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:24.715 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:24.715 [2024-07-15 22:02:11.566736] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:07:24.715 [2024-07-15 22:02:11.566829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.973 [2024-07-15 22:02:11.706327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.973 [2024-07-15 22:02:11.789912] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.973 [2024-07-15 22:02:11.789967] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.973 [2024-07-15 22:02:11.789978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.973 [2024-07-15 22:02:11.789987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.973 [2024-07-15 22:02:11.789994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
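For reference, the nvmf_veth_init plumbing traced above builds the following topology before the target starts. The commands are a condensed sketch assembled from the trace (interface, namespace and address names are the ones shown there; the second target interface nvmf_tgt_if2 with 10.0.0.3 is handled the same way and omitted here, and error handling is dropped).

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator-to-target reachability check
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The target then listens on 10.0.0.2:4420 inside the namespace, which is why the subsystem listeners and the nvme discover command later in this log all use -t tcp -a 10.0.0.2 -s 4420.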
00:07:24.973 [2024-07-15 22:02:11.790077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.973 [2024-07-15 22:02:11.790170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.973 [2024-07-15 22:02:11.790611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.973 [2024-07-15 22:02:11.790625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.231 [2024-07-15 22:02:11.971379] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:25.231 22:02:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:25.232 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 Null1 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.232 [2024-07-15 22:02:12.026365] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 Null2 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 Null3 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 Null4 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.232 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.232 
22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -a 10.0.0.2 -s 4420 00:07:25.491 00:07:25.491 Discovery Log Number of Records 6, Generation counter 6 00:07:25.491 =====Discovery Log Entry 0====== 00:07:25.491 trtype: tcp 00:07:25.491 adrfam: ipv4 00:07:25.491 subtype: current discovery subsystem 00:07:25.491 treq: not required 00:07:25.491 portid: 0 00:07:25.491 trsvcid: 4420 00:07:25.491 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:25.491 traddr: 10.0.0.2 00:07:25.492 eflags: explicit discovery connections, duplicate discovery information 00:07:25.492 sectype: none 00:07:25.492 =====Discovery Log Entry 1====== 00:07:25.492 trtype: tcp 00:07:25.492 adrfam: ipv4 00:07:25.492 subtype: nvme subsystem 00:07:25.492 treq: not required 00:07:25.492 portid: 0 00:07:25.492 trsvcid: 4420 00:07:25.492 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:25.492 traddr: 10.0.0.2 00:07:25.492 eflags: none 00:07:25.492 sectype: none 00:07:25.492 =====Discovery Log Entry 2====== 00:07:25.492 trtype: tcp 00:07:25.492 adrfam: ipv4 00:07:25.492 subtype: nvme subsystem 00:07:25.492 treq: not required 00:07:25.492 portid: 0 00:07:25.492 trsvcid: 4420 00:07:25.492 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:25.492 traddr: 10.0.0.2 00:07:25.492 eflags: none 00:07:25.492 sectype: none 00:07:25.492 =====Discovery Log Entry 3====== 00:07:25.492 trtype: tcp 00:07:25.492 adrfam: ipv4 00:07:25.492 subtype: nvme subsystem 00:07:25.492 treq: not required 00:07:25.492 portid: 0 00:07:25.492 trsvcid: 4420 00:07:25.492 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:25.492 traddr: 10.0.0.2 00:07:25.492 eflags: none 00:07:25.492 sectype: none 00:07:25.492 =====Discovery Log Entry 4====== 00:07:25.492 trtype: tcp 00:07:25.492 adrfam: ipv4 00:07:25.492 subtype: nvme subsystem 00:07:25.492 treq: not required 00:07:25.492 portid: 0 00:07:25.492 trsvcid: 4420 00:07:25.492 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:25.492 traddr: 10.0.0.2 00:07:25.492 eflags: none 00:07:25.492 sectype: none 00:07:25.492 =====Discovery Log Entry 5====== 00:07:25.492 trtype: tcp 00:07:25.492 adrfam: ipv4 00:07:25.492 subtype: discovery subsystem referral 00:07:25.492 treq: not required 00:07:25.492 portid: 0 00:07:25.492 trsvcid: 4430 00:07:25.492 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:25.492 traddr: 10.0.0.2 00:07:25.492 eflags: none 00:07:25.492 sectype: none 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:25.492 Perform nvmf subsystem discovery via RPC 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.492 [ 00:07:25.492 { 00:07:25.492 "allow_any_host": true, 00:07:25.492 "hosts": [], 00:07:25.492 "listen_addresses": [ 00:07:25.492 { 00:07:25.492 "adrfam": "IPv4", 00:07:25.492 "traddr": "10.0.0.2", 00:07:25.492 "trsvcid": "4420", 00:07:25.492 "trtype": "TCP" 00:07:25.492 } 00:07:25.492 ], 00:07:25.492 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:25.492 "subtype": "Discovery" 00:07:25.492 }, 00:07:25.492 { 00:07:25.492 "allow_any_host": true, 00:07:25.492 "hosts": [], 00:07:25.492 "listen_addresses": [ 00:07:25.492 { 
00:07:25.492 "adrfam": "IPv4", 00:07:25.492 "traddr": "10.0.0.2", 00:07:25.492 "trsvcid": "4420", 00:07:25.492 "trtype": "TCP" 00:07:25.492 } 00:07:25.492 ], 00:07:25.492 "max_cntlid": 65519, 00:07:25.492 "max_namespaces": 32, 00:07:25.492 "min_cntlid": 1, 00:07:25.492 "model_number": "SPDK bdev Controller", 00:07:25.492 "namespaces": [ 00:07:25.492 { 00:07:25.492 "bdev_name": "Null1", 00:07:25.492 "name": "Null1", 00:07:25.492 "nguid": "CB65EA51631A466C8A13CBC49C445E20", 00:07:25.492 "nsid": 1, 00:07:25.492 "uuid": "cb65ea51-631a-466c-8a13-cbc49c445e20" 00:07:25.492 } 00:07:25.492 ], 00:07:25.492 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:25.492 "serial_number": "SPDK00000000000001", 00:07:25.492 "subtype": "NVMe" 00:07:25.492 }, 00:07:25.492 { 00:07:25.492 "allow_any_host": true, 00:07:25.492 "hosts": [], 00:07:25.492 "listen_addresses": [ 00:07:25.492 { 00:07:25.492 "adrfam": "IPv4", 00:07:25.492 "traddr": "10.0.0.2", 00:07:25.492 "trsvcid": "4420", 00:07:25.492 "trtype": "TCP" 00:07:25.492 } 00:07:25.492 ], 00:07:25.492 "max_cntlid": 65519, 00:07:25.492 "max_namespaces": 32, 00:07:25.492 "min_cntlid": 1, 00:07:25.492 "model_number": "SPDK bdev Controller", 00:07:25.492 "namespaces": [ 00:07:25.492 { 00:07:25.492 "bdev_name": "Null2", 00:07:25.492 "name": "Null2", 00:07:25.492 "nguid": "F4980A57BB1D412D9DEEE3C2C4509565", 00:07:25.492 "nsid": 1, 00:07:25.492 "uuid": "f4980a57-bb1d-412d-9dee-e3c2c4509565" 00:07:25.492 } 00:07:25.492 ], 00:07:25.492 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:25.492 "serial_number": "SPDK00000000000002", 00:07:25.492 "subtype": "NVMe" 00:07:25.492 }, 00:07:25.492 { 00:07:25.492 "allow_any_host": true, 00:07:25.492 "hosts": [], 00:07:25.492 "listen_addresses": [ 00:07:25.492 { 00:07:25.492 "adrfam": "IPv4", 00:07:25.492 "traddr": "10.0.0.2", 00:07:25.492 "trsvcid": "4420", 00:07:25.492 "trtype": "TCP" 00:07:25.492 } 00:07:25.492 ], 00:07:25.492 "max_cntlid": 65519, 00:07:25.492 "max_namespaces": 32, 00:07:25.492 "min_cntlid": 1, 00:07:25.492 "model_number": "SPDK bdev Controller", 00:07:25.492 "namespaces": [ 00:07:25.492 { 00:07:25.492 "bdev_name": "Null3", 00:07:25.492 "name": "Null3", 00:07:25.492 "nguid": "115F48854B864B0A940E7EBDA2471BB9", 00:07:25.492 "nsid": 1, 00:07:25.492 "uuid": "115f4885-4b86-4b0a-940e-7ebda2471bb9" 00:07:25.492 } 00:07:25.492 ], 00:07:25.492 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:25.492 "serial_number": "SPDK00000000000003", 00:07:25.492 "subtype": "NVMe" 00:07:25.492 }, 00:07:25.492 { 00:07:25.492 "allow_any_host": true, 00:07:25.492 "hosts": [], 00:07:25.492 "listen_addresses": [ 00:07:25.492 { 00:07:25.492 "adrfam": "IPv4", 00:07:25.492 "traddr": "10.0.0.2", 00:07:25.492 "trsvcid": "4420", 00:07:25.492 "trtype": "TCP" 00:07:25.492 } 00:07:25.492 ], 00:07:25.492 "max_cntlid": 65519, 00:07:25.492 "max_namespaces": 32, 00:07:25.492 "min_cntlid": 1, 00:07:25.492 "model_number": "SPDK bdev Controller", 00:07:25.492 "namespaces": [ 00:07:25.492 { 00:07:25.492 "bdev_name": "Null4", 00:07:25.492 "name": "Null4", 00:07:25.492 "nguid": "236ABAF907E74BCC9AA724493B1FB221", 00:07:25.492 "nsid": 1, 00:07:25.492 "uuid": "236abaf9-07e7-4bcc-9aa7-24493b1fb221" 00:07:25.492 } 00:07:25.492 ], 00:07:25.492 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:25.492 "serial_number": "SPDK00000000000004", 00:07:25.492 "subtype": "NVMe" 00:07:25.492 } 00:07:25.492 ] 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.492 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:25.493 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:25.493 rmmod nvme_tcp 00:07:25.493 rmmod nvme_fabrics 00:07:25.493 rmmod nvme_keyring 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 66137 ']' 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 66137 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 66137 ']' 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 66137 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66137 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:25.751 
22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:25.751 killing process with pid 66137 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66137' 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 66137 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 66137 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:25.751 ************************************ 00:07:25.751 END TEST nvmf_target_discovery 00:07:25.751 ************************************ 00:07:25.751 00:07:25.751 real 0m1.619s 00:07:25.751 user 0m3.610s 00:07:25.751 sys 0m0.516s 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.751 22:02:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:26.009 22:02:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:26.009 22:02:12 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:26.009 22:02:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:26.009 22:02:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.009 22:02:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.009 ************************************ 00:07:26.009 START TEST nvmf_referrals 00:07:26.009 ************************************ 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:26.009 * Looking for test storage... 
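For readability, the teardown that the xtrace output above records reduces to the shell sketch below. It assumes the usual autotest rpc_cmd wrapper around scripts/rpc.py talking to the running target; the subsystem, bdev, and referral values are copied from the log itself.

  # discovery.sh teardown: drop the four test subsystems and their null bdevs,
  # remove the discovery referral, confirm no bdevs remain, then shut down.
  for i in $(seq 1 4); do
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      rpc_cmd bdev_null_delete "Null$i"
  done
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  [ -z "$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')" ]   # nothing should be left
  nvmftestfini    # unloads nvme-tcp/nvme-fabrics and kills the nvmf_tgt pid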
00:07:26.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.009 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:26.010 Cannot find device "nvmf_tgt_br" 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:26.010 Cannot find device "nvmf_tgt_br2" 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:26.010 Cannot find device "nvmf_tgt_br" 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:26.010 Cannot find device "nvmf_tgt_br2" 
00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:26.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:26.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:07:26.010 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:26.268 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:26.268 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:26.268 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:26.268 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:26.268 22:02:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:26.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:07:26.268 00:07:26.268 --- 10.0.0.2 ping statistics --- 00:07:26.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.268 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:26.268 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:26.268 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:07:26.268 00:07:26.268 --- 10.0.0.3 ping statistics --- 00:07:26.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.268 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:26.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:07:26.268 00:07:26.268 --- 10.0.0.1 ping statistics --- 00:07:26.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.268 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:26.268 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:26.269 22:02:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.269 22:02:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:26.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
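The nvmf_veth_init steps above amount to a small virtual topology: the target gets its own network namespace and veth interface, the initiator stays in the root namespace, and a bridge joins the host-side peers. A condensed sketch of what the log just executed, with names and addresses copied from it (the second target interface, nvmf_tgt_if2 at 10.0.0.3, follows the same pattern and is omitted here):

  # Namespace and veth pairs.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addresses: initiator 10.0.0.1, target 10.0.0.2 inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers and open the NVMe/TCP port.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2    # connectivity check, as in the ping output above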
00:07:26.269 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66340 00:07:26.269 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:26.269 22:02:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66340 00:07:26.269 22:02:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66340 ']' 00:07:26.269 22:02:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.269 22:02:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.269 22:02:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.269 22:02:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.269 22:02:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:26.526 [2024-07-15 22:02:13.258684] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:07:26.526 [2024-07-15 22:02:13.258816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.526 [2024-07-15 22:02:13.404781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.853 [2024-07-15 22:02:13.484219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.853 [2024-07-15 22:02:13.484535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.853 [2024-07-15 22:02:13.484717] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.853 [2024-07-15 22:02:13.484923] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.853 [2024-07-15 22:02:13.485096] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
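With the topology in place, nvmfappstart launches the target inside that namespace, as the nvmf/common.sh@480 and @482 lines above show. The essential shape of that step, sketched roughly (waitforlisten is the autotest helper that blocks until the RPC socket is ready; the flags mirror the log: shm id 0, tracepoint mask 0xFFFF, 4-core mask 0xF):

  # Start nvmf_tgt in the target namespace, then wait for /var/tmp/spdk.sock.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"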
00:07:26.853 [2024-07-15 22:02:13.485382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.853 [2024-07-15 22:02:13.485437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.853 [2024-07-15 22:02:13.485500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.853 [2024-07-15 22:02:13.485492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.473 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.473 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:27.473 22:02:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.473 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.474 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.474 22:02:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.474 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.474 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.474 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.474 [2024-07-15 22:02:14.397636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.732 [2024-07-15 22:02:14.430024] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:27.732 
22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:27.732 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd 
nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:27.991 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:28.249 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:28.249 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:28.249 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:28.249 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:28.249 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:28.249 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.249 22:02:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
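Stripped of the xtrace framing, the referral exercise recorded above boils down to the RPC sequence below; a rough sketch using the same rpc_cmd wrapper, with transport options, addresses, and ports copied from the log.

  # Transport plus discovery listener, then plain referrals to three addresses.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  rpc_cmd nvmf_discovery_get_referrals | jq length                      # expect 3
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc_cmd nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  rpc_cmd nvmf_discovery_get_referrals | jq length                      # expect 0

  # Referrals can also name a subsystem NQN explicitly instead of the
  # discovery service; both variants are exercised above.
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1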
00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:28.249 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.508 22:02:15 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:28.508 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.767 rmmod nvme_tcp 00:07:28.767 rmmod nvme_fabrics 00:07:28.767 rmmod nvme_keyring 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66340 ']' 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66340 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66340 ']' 00:07:28.767 22:02:15 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66340 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66340 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:28.767 killing process with pid 66340 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66340' 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66340 00:07:28.767 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66340 00:07:29.026 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:29.026 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:29.026 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:29.026 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:29.026 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:29.026 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.026 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.026 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.026 22:02:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:29.026 00:07:29.026 real 0m3.132s 00:07:29.026 user 0m10.369s 00:07:29.026 sys 0m0.761s 00:07:29.026 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.026 22:02:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:29.026 ************************************ 00:07:29.026 END TEST nvmf_referrals 00:07:29.026 ************************************ 00:07:29.026 22:02:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:29.026 22:02:15 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:29.026 22:02:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:29.026 22:02:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.026 22:02:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.026 ************************************ 00:07:29.026 START TEST nvmf_connect_disconnect 00:07:29.026 ************************************ 00:07:29.026 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:29.026 * Looking for test storage... 
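Before the connect/disconnect output continues, one detail of the referrals run above is worth pulling out: all of the host-side checks are built on nvme discover against the discovery service plus jq filters over its JSON output. Roughly, with the filters taken from the log and the hostnqn/hostid being whatever nvme gen-hostnqn produced earlier in this run:

  # Referral addresses as seen by the host: every record except the current
  # discovery subsystem itself.
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # The same JSON split by record type, e.g. the subsystem NQN behind a referral.
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'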
00:07:29.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.285 22:02:15 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:29.285 22:02:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:29.286 Cannot find device "nvmf_tgt_br" 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.286 Cannot find device "nvmf_tgt_br2" 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:29.286 Cannot find device "nvmf_tgt_br" 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:29.286 Cannot find device 
"nvmf_tgt_br2" 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:29.286 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:29.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:07:29.545 00:07:29.545 --- 10.0.0.2 ping statistics --- 00:07:29.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.545 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:29.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:29.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:07:29.545 00:07:29.545 --- 10.0.0.3 ping statistics --- 00:07:29.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.545 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:29.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:07:29.545 00:07:29.545 --- 10.0.0.1 ping statistics --- 00:07:29.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.545 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66644 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66644 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66644 ']' 00:07:29.545 
22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.545 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:29.545 [2024-07-15 22:02:16.433494] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:07:29.545 [2024-07-15 22:02:16.433604] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.803 [2024-07-15 22:02:16.569791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.803 [2024-07-15 22:02:16.637821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.803 [2024-07-15 22:02:16.637881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.803 [2024-07-15 22:02:16.637893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.803 [2024-07-15 22:02:16.637901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.803 [2024-07-15 22:02:16.637908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
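The nvmf_veth_init sequence traced above (nvmf/common.sh lines 141 through 207) builds the virtual test network that every nvmf-tcp test in this run reuses: a network namespace for the SPDK target, veth pairs whose host-side peers hang off a bridge, and an iptables rule that admits NVMe/TCP traffic on port 4420. The sketch below is a condensed paraphrase of those traced commands, not the script itself; interface and address names are the ones shown in the trace (NVMF_INITIATOR_IP=10.0.0.1, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_SECOND_TARGET_IP=10.0.0.3), and the teardown of stale interfaces (the "Cannot find device" lines) is omitted.

    # Condensed paraphrase of the nvmf_veth_init trace above; error handling omitted.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                               # bridge ties the host-side peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator side
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # host -> namespace reachability
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # namespace -> host reachability

With this in place the SPDK target runs inside nvmf_tgt_ns_spdk and listens on 10.0.0.2:4420, while the kernel nvme-tcp initiator on the host reaches it through nvmf_init_if.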
00:07:29.803 [2024-07-15 22:02:16.637970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.803 [2024-07-15 22:02:16.638071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.803 [2024-07-15 22:02:16.638505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.803 [2024-07-15 22:02:16.638538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.803 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.803 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:29.803 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:29.803 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:29.803 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:30.067 [2024-07-15 22:02:16.770393] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.067 [2024-07-15 22:02:16.836940] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:30.067 22:02:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:32.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:41.480 rmmod nvme_tcp 00:07:41.480 rmmod nvme_fabrics 00:07:41.480 rmmod nvme_keyring 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66644 ']' 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66644 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66644 ']' 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66644 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66644 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:41.480 killing process with pid 66644 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66644' 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66644 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66644 00:07:41.480 22:02:28 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:41.480 00:07:41.480 real 0m12.496s 00:07:41.480 user 0m45.086s 00:07:41.480 sys 0m1.998s 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.480 ************************************ 00:07:41.480 22:02:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:41.480 END TEST nvmf_connect_disconnect 00:07:41.480 ************************************ 00:07:41.741 22:02:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:41.741 22:02:28 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:41.741 22:02:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:41.741 22:02:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.741 22:02:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.741 ************************************ 00:07:41.741 START TEST nvmf_multitarget 00:07:41.741 ************************************ 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:41.741 * Looking for test storage... 
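For reference before the next test starts its own setup: the nvmf_connect_disconnect run that just finished configured its target entirely through the rpc_cmd calls traced above. Assuming rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock (the wrapper body is not expanded in this trace, and the rpc.py path below is inferred from the repo paths that do appear in the log), the sequence spelled out is roughly:

    # Sketch only; the flags are the ones visible in the connect_disconnect.sh trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                        # trace shows the returned bdev name: Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are nvme-cli output from the test's connect/disconnect loop (num_iterations=5 in the trace); the loop body itself is not expanded in this log.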
00:07:41.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.741 22:02:28 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:41.741 Cannot find device "nvmf_tgt_br" 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:41.741 Cannot find device "nvmf_tgt_br2" 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:07:41.741 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:41.742 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:41.742 Cannot find device "nvmf_tgt_br" 00:07:41.742 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:07:41.742 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:42.028 Cannot find device "nvmf_tgt_br2" 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:07:42.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:42.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:42.028 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:42.287 22:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:42.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:42.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:07:42.287 00:07:42.287 --- 10.0.0.2 ping statistics --- 00:07:42.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.287 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:42.287 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:42.287 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:07:42.287 00:07:42.287 --- 10.0.0.3 ping statistics --- 00:07:42.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.287 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:42.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:07:42.287 00:07:42.287 --- 10.0.0.1 ping statistics --- 00:07:42.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.287 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=67033 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 67033 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 67033 ']' 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
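waitforlisten, traced here for the second time in this run, is the start-and-poll step: launch nvmf_tgt inside the namespace, then retry RPCs against /var/tmp/spdk.sock until the application answers (the locals in the trace, rpc_addr=/var/tmp/spdk.sock and max_retries=100, together with the closing '(( i == 0 ))' and 'return 0' lines, point at a bounded retry loop). The loop below is a plausible reconstruction only, not the actual autotest_common.sh code, and the rpc_get_methods probe is an assumption:

    # Hypothetical sketch of the start-and-wait pattern; the nvmf_tgt command line is the one traced above.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do                      # max_retries=100 per the trace
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done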
00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.287 22:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:42.287 [2024-07-15 22:02:29.121964] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:07:42.287 [2024-07-15 22:02:29.122060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.545 [2024-07-15 22:02:29.255885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.545 [2024-07-15 22:02:29.317289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.545 [2024-07-15 22:02:29.317350] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.545 [2024-07-15 22:02:29.317361] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.545 [2024-07-15 22:02:29.317370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.545 [2024-07-15 22:02:29.317377] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.545 [2024-07-15 22:02:29.318290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.545 [2024-07-15 22:02:29.318373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.545 [2024-07-15 22:02:29.318707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.545 [2024-07-15 22:02:29.318778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.477 22:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.477 22:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:43.477 22:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.477 22:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.477 22:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:43.477 22:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.477 22:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:43.477 22:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:43.477 22:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:43.735 22:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:43.735 22:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:43.735 "nvmf_tgt_1" 00:07:43.735 22:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:43.992 "nvmf_tgt_2" 00:07:43.992 22:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:07:43.992 22:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:44.250 22:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:44.250 22:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:44.250 true 00:07:44.250 22:02:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:44.547 true 00:07:44.547 22:02:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:44.547 22:02:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:44.547 22:02:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:44.547 22:02:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:44.547 22:02:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:44.547 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:44.547 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.805 rmmod nvme_tcp 00:07:44.805 rmmod nvme_fabrics 00:07:44.805 rmmod nvme_keyring 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 67033 ']' 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 67033 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 67033 ']' 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 67033 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67033 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.805 killing process with pid 67033 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67033' 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 67033 00:07:44.805 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 67033 00:07:45.064 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:45.064 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:45.064 22:02:31 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:45.064 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.064 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:45.064 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.064 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.064 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.064 22:02:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:45.064 00:07:45.064 real 0m3.379s 00:07:45.064 user 0m11.384s 00:07:45.064 sys 0m0.769s 00:07:45.064 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.064 ************************************ 00:07:45.064 END TEST nvmf_multitarget 00:07:45.064 ************************************ 00:07:45.064 22:02:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:45.064 22:02:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:45.065 22:02:31 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:45.065 22:02:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:45.065 22:02:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.065 22:02:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.065 ************************************ 00:07:45.065 START TEST nvmf_rpc 00:07:45.065 ************************************ 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:45.065 * Looking for test storage... 
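Before the nvmf_rpc test begins its own setup, the multitarget flow that just completed reduces to a handful of calls to test/nvmf/target/multitarget_rpc.py, all of which are visible in the trace above. Condensed paraphrase (the $rpc shorthand is introduced here only for readability; the jq count checks from the script are written out as comments):

    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py   # path from the trace
    $rpc nvmf_get_targets | jq length              # 1 target before the test adds more (first count check)
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32    # prints "nvmf_tgt_1"
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32    # prints "nvmf_tgt_2"
    $rpc nvmf_get_targets | jq length              # 3 targets once both are created
    $rpc nvmf_delete_target -n nvmf_tgt_1          # trace shows "true"
    $rpc nvmf_delete_target -n nvmf_tgt_2          # trace shows "true"
    $rpc nvmf_get_targets | jq length              # back to 1

The '[' N '!=' N ']' lines in the trace are the script asserting each of those counts.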
00:07:45.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:45.065 22:02:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:45.065 Cannot find device "nvmf_tgt_br" 00:07:45.065 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:07:45.065 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:45.323 Cannot find device "nvmf_tgt_br2" 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:45.323 Cannot find device "nvmf_tgt_br" 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:45.323 Cannot find device "nvmf_tgt_br2" 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:45.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:45.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:45.323 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:45.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:07:45.582 00:07:45.582 --- 10.0.0.2 ping statistics --- 00:07:45.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.582 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:45.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:45.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:07:45.582 00:07:45.582 --- 10.0.0.3 ping statistics --- 00:07:45.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.582 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:45.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:45.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:07:45.582 00:07:45.582 --- 10.0.0.1 ping statistics --- 00:07:45.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.582 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67267 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67267 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67267 ']' 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.582 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.582 [2024-07-15 22:02:32.424613] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:07:45.582 [2024-07-15 22:02:32.424742] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.840 [2024-07-15 22:02:32.569588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.840 [2024-07-15 22:02:32.652897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.840 [2024-07-15 22:02:32.652953] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:45.840 [2024-07-15 22:02:32.652964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.840 [2024-07-15 22:02:32.652972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.840 [2024-07-15 22:02:32.652979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.840 [2024-07-15 22:02:32.653061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.840 [2024-07-15 22:02:32.653497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.840 [2024-07-15 22:02:32.653578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.840 [2024-07-15 22:02:32.653740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.840 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.840 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:45.840 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:45.840 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:45.840 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:46.099 "poll_groups": [ 00:07:46.099 { 00:07:46.099 "admin_qpairs": 0, 00:07:46.099 "completed_nvme_io": 0, 00:07:46.099 "current_admin_qpairs": 0, 00:07:46.099 "current_io_qpairs": 0, 00:07:46.099 "io_qpairs": 0, 00:07:46.099 "name": "nvmf_tgt_poll_group_000", 00:07:46.099 "pending_bdev_io": 0, 00:07:46.099 "transports": [] 00:07:46.099 }, 00:07:46.099 { 00:07:46.099 "admin_qpairs": 0, 00:07:46.099 "completed_nvme_io": 0, 00:07:46.099 "current_admin_qpairs": 0, 00:07:46.099 "current_io_qpairs": 0, 00:07:46.099 "io_qpairs": 0, 00:07:46.099 "name": "nvmf_tgt_poll_group_001", 00:07:46.099 "pending_bdev_io": 0, 00:07:46.099 "transports": [] 00:07:46.099 }, 00:07:46.099 { 00:07:46.099 "admin_qpairs": 0, 00:07:46.099 "completed_nvme_io": 0, 00:07:46.099 "current_admin_qpairs": 0, 00:07:46.099 "current_io_qpairs": 0, 00:07:46.099 "io_qpairs": 0, 00:07:46.099 "name": "nvmf_tgt_poll_group_002", 00:07:46.099 "pending_bdev_io": 0, 00:07:46.099 "transports": [] 00:07:46.099 }, 00:07:46.099 { 00:07:46.099 "admin_qpairs": 0, 00:07:46.099 "completed_nvme_io": 0, 00:07:46.099 "current_admin_qpairs": 0, 00:07:46.099 "current_io_qpairs": 0, 00:07:46.099 "io_qpairs": 0, 00:07:46.099 "name": "nvmf_tgt_poll_group_003", 00:07:46.099 "pending_bdev_io": 0, 00:07:46.099 "transports": [] 00:07:46.099 } 00:07:46.099 ], 00:07:46.099 "tick_rate": 2200000000 00:07:46.099 }' 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.099 [2024-07-15 22:02:32.940491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.099 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:46.099 "poll_groups": [ 00:07:46.099 { 00:07:46.099 "admin_qpairs": 0, 00:07:46.099 "completed_nvme_io": 0, 00:07:46.099 "current_admin_qpairs": 0, 00:07:46.099 "current_io_qpairs": 0, 00:07:46.099 "io_qpairs": 0, 00:07:46.099 "name": "nvmf_tgt_poll_group_000", 00:07:46.100 "pending_bdev_io": 0, 00:07:46.100 "transports": [ 00:07:46.100 { 00:07:46.100 "trtype": "TCP" 00:07:46.100 } 00:07:46.100 ] 00:07:46.100 }, 00:07:46.100 { 00:07:46.100 "admin_qpairs": 0, 00:07:46.100 "completed_nvme_io": 0, 00:07:46.100 "current_admin_qpairs": 0, 00:07:46.100 "current_io_qpairs": 0, 00:07:46.100 "io_qpairs": 0, 00:07:46.100 "name": "nvmf_tgt_poll_group_001", 00:07:46.100 "pending_bdev_io": 0, 00:07:46.100 "transports": [ 00:07:46.100 { 00:07:46.100 "trtype": "TCP" 00:07:46.100 } 00:07:46.100 ] 00:07:46.100 }, 00:07:46.100 { 00:07:46.100 "admin_qpairs": 0, 00:07:46.100 "completed_nvme_io": 0, 00:07:46.100 "current_admin_qpairs": 0, 00:07:46.100 "current_io_qpairs": 0, 00:07:46.100 "io_qpairs": 0, 00:07:46.100 "name": "nvmf_tgt_poll_group_002", 00:07:46.100 "pending_bdev_io": 0, 00:07:46.100 "transports": [ 00:07:46.100 { 00:07:46.100 "trtype": "TCP" 00:07:46.100 } 00:07:46.100 ] 00:07:46.100 }, 00:07:46.100 { 00:07:46.100 "admin_qpairs": 0, 00:07:46.100 "completed_nvme_io": 0, 00:07:46.100 "current_admin_qpairs": 0, 00:07:46.100 "current_io_qpairs": 0, 00:07:46.100 "io_qpairs": 0, 00:07:46.100 "name": "nvmf_tgt_poll_group_003", 00:07:46.100 "pending_bdev_io": 0, 00:07:46.100 "transports": [ 00:07:46.100 { 00:07:46.100 "trtype": "TCP" 00:07:46.100 } 00:07:46.100 ] 00:07:46.100 } 00:07:46.100 ], 00:07:46.100 "tick_rate": 2200000000 00:07:46.100 }' 00:07:46.100 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:46.100 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:46.100 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:46.100 22:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
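The jcount/jsum checks above are thin jq wrappers over nvmf_get_stats. Outside the harness the same assertions can be reproduced with scripts/rpc.py directly; rpc_cmd in the log is the harness's RPC helper, and the $rpc shorthand below is only for brevity.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # jcount: one poll group per reactor core (0xF mask, so 4 expected here)
    $rpc nvmf_get_stats | jq '.poll_groups[].name' | wc -l
    # jsum: qpair totals summed across all poll groups (0 before any host connects)
    $rpc nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
    $rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'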
00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 Malloc1 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 [2024-07-15 22:02:33.155963] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -a 10.0.0.2 -s 4420 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -a 10.0.0.2 -s 4420 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -a 10.0.0.2 -s 4420 00:07:46.359 [2024-07-15 22:02:33.178003] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29' 00:07:46.359 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:46.359 could not add new controller: failed to write to nvme-fabrics device 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.359 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:46.617 22:02:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:46.617 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:46.617 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:46.617 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:46.617 22:02:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:48.514 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:48.514 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:48.514 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:48.514 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:48.515 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:48.515 22:02:35 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:48.515 22:02:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:48.772 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:48.773 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:48.773 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:48.773 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.773 [2024-07-15 22:02:35.569719] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29' 00:07:48.773 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:48.773 could not add new controller: failed to write to nvme-fabrics device 00:07:48.773 22:02:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:48.773 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:48.773 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:48.773 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:48.773 22:02:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:48.773 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.773 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.773 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.773 22:02:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:49.031 22:02:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:49.031 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:49.031 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:49.031 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:49.031 22:02:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:50.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:50.930 22:02:37 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.930 [2024-07-15 22:02:37.855389] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.930 22:02:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:51.188 22:02:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:51.188 22:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:51.188 22:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:51.188 22:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:51.188 22:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:53.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.718 [2024-07-15 22:02:40.150987] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:53.718 22:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:55.612 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:55.612 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:55.612 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:55.612 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:55.612 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:55.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.613 [2024-07-15 22:02:42.442271] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.613 22:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.869 22:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.869 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:55.869 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.869 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:55.869 22:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:57.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.779 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.090 [2024-07-15 22:02:44.737297] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:58.090 22:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:00.005 22:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:00.005 22:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:00.005 22:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:00.005 22:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:00.005 22:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:00.005 
22:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:00.005 22:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:00.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.264 22:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:00.264 22:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:00.264 22:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:00.264 22:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.264 22:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:00.264 22:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.264 [2024-07-15 22:02:47.037598] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.264 22:02:47 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.264 22:02:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:00.522 22:02:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:00.522 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:00.522 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:00.522 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:00.522 22:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.422 [2024-07-15 22:02:49.340654] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.422 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.681 [2024-07-15 22:02:49.388676] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.681 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 [2024-07-15 22:02:49.436703] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
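The connect/disconnect rounds earlier in this test (target/rpc.sh@86 through @91, repeated for each subsystem variant) all follow the same host-side pattern. Stripped of the harness's bounded retry loops it reduces to roughly the following; the UUID is the per-VM host ID the log passes as both the --hostnqn suffix and --hostid.

    hostid=ff65e169-209e-4b79-b82d-da213c413a29
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid --hostid=$hostid
    # waitforserial: the namespace is usable once its serial appears in lsblk
    until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # waitforserial_disconnect: block until the serial is gone again
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done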
00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 [2024-07-15 22:02:49.484754] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
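The target/rpc.sh@99-@107 loop now running provisions and tears down the same subsystem five times purely over RPC, with no host connection in between. One iteration corresponds to the sketch below, with direct scripts/rpc.py calls standing in for the harness's rpc_cmd helper.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # teardown: remove the namespace (nsid 1) first, then the subsystem itself
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1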
00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 [2024-07-15 22:02:49.532813] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.682 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:02.682 "poll_groups": [ 00:08:02.682 { 00:08:02.682 "admin_qpairs": 2, 00:08:02.682 "completed_nvme_io": 66, 00:08:02.682 "current_admin_qpairs": 0, 00:08:02.682 "current_io_qpairs": 0, 00:08:02.682 "io_qpairs": 16, 00:08:02.682 "name": "nvmf_tgt_poll_group_000", 00:08:02.682 "pending_bdev_io": 0, 00:08:02.682 "transports": [ 00:08:02.682 { 00:08:02.682 "trtype": "TCP" 00:08:02.682 } 00:08:02.682 ] 00:08:02.682 }, 00:08:02.682 { 00:08:02.682 "admin_qpairs": 3, 00:08:02.682 "completed_nvme_io": 67, 00:08:02.682 "current_admin_qpairs": 0, 00:08:02.682 "current_io_qpairs": 0, 00:08:02.682 "io_qpairs": 17, 00:08:02.682 "name": "nvmf_tgt_poll_group_001", 00:08:02.682 "pending_bdev_io": 0, 00:08:02.682 "transports": [ 00:08:02.682 { 00:08:02.682 "trtype": "TCP" 00:08:02.682 } 00:08:02.682 ] 00:08:02.682 }, 00:08:02.682 { 00:08:02.682 "admin_qpairs": 1, 00:08:02.682 
"completed_nvme_io": 121, 00:08:02.682 "current_admin_qpairs": 0, 00:08:02.682 "current_io_qpairs": 0, 00:08:02.682 "io_qpairs": 19, 00:08:02.682 "name": "nvmf_tgt_poll_group_002", 00:08:02.682 "pending_bdev_io": 0, 00:08:02.682 "transports": [ 00:08:02.682 { 00:08:02.682 "trtype": "TCP" 00:08:02.682 } 00:08:02.682 ] 00:08:02.682 }, 00:08:02.682 { 00:08:02.682 "admin_qpairs": 1, 00:08:02.682 "completed_nvme_io": 166, 00:08:02.683 "current_admin_qpairs": 0, 00:08:02.683 "current_io_qpairs": 0, 00:08:02.683 "io_qpairs": 18, 00:08:02.683 "name": "nvmf_tgt_poll_group_003", 00:08:02.683 "pending_bdev_io": 0, 00:08:02.683 "transports": [ 00:08:02.683 { 00:08:02.683 "trtype": "TCP" 00:08:02.683 } 00:08:02.683 ] 00:08:02.683 } 00:08:02.683 ], 00:08:02.683 "tick_rate": 2200000000 00:08:02.683 }' 00:08:02.683 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:02.683 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:02.683 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:02.683 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:02.942 rmmod nvme_tcp 00:08:02.942 rmmod nvme_fabrics 00:08:02.942 rmmod nvme_keyring 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67267 ']' 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67267 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67267 ']' 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67267 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67267 00:08:02.942 killing process with pid 67267 00:08:02.942 22:02:49 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67267' 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67267 00:08:02.942 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67267 00:08:03.201 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:03.201 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:03.201 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:03.201 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.201 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:03.201 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.201 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.201 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.201 22:02:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:03.201 00:08:03.201 real 0m18.126s 00:08:03.201 user 1m7.252s 00:08:03.201 sys 0m2.868s 00:08:03.201 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.201 ************************************ 00:08:03.201 END TEST nvmf_rpc 00:08:03.201 22:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.201 ************************************ 00:08:03.201 22:02:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:03.201 22:02:50 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:03.201 22:02:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:03.201 22:02:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.201 22:02:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:03.201 ************************************ 00:08:03.201 START TEST nvmf_invalid 00:08:03.201 ************************************ 00:08:03.201 22:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:03.201 * Looking for test storage... 
00:08:03.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:03.201 22:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:03.201 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:03.201 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.201 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.201 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.201 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.201 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.201 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.201 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.201 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.202 
22:02:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.202 22:02:50 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:03.202 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:03.461 Cannot find device "nvmf_tgt_br" 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:03.461 Cannot find device "nvmf_tgt_br2" 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:03.461 Cannot find device "nvmf_tgt_br" 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:03.461 Cannot find device "nvmf_tgt_br2" 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:03.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:03.461 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:03.461 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:03.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:08:03.720 00:08:03.720 --- 10.0.0.2 ping statistics --- 00:08:03.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.720 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:03.720 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:03.720 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:08:03.720 00:08:03.720 --- 10.0.0.3 ping statistics --- 00:08:03.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.720 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:03.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:08:03.720 00:08:03.720 --- 10.0.0.1 ping statistics --- 00:08:03.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.720 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:03.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67767 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67767 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 67767 ']' 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.720 22:02:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:03.720 [2024-07-15 22:02:50.583690] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
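The three pings above are the sanity check for the veth/bridge topology that nvmf_veth_init builds before nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of that plumbing follows; it keeps the interface and address names from the trace but drops the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the error handling for brevity.

    #!/usr/bin/env bash
    # Condensed nvmf_veth_init: the initiator stays in the root namespace (10.0.0.1),
    # the target lives in nvmf_tgt_ns_spdk (10.0.0.2), joined through a bridge.
    set -e
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace

With the namespace in place, nvmfappstart launches the target as "ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt ...", which is visible in the nvmfpid/waitforlisten lines just above.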
00:08:03.720 [2024-07-15 22:02:50.583796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.978 [2024-07-15 22:02:50.721835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.978 [2024-07-15 22:02:50.795582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.978 [2024-07-15 22:02:50.795893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.978 [2024-07-15 22:02:50.796054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.978 [2024-07-15 22:02:50.796314] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.978 [2024-07-15 22:02:50.796450] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.978 [2024-07-15 22:02:50.796710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.978 [2024-07-15 22:02:50.796769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.978 [2024-07-15 22:02:50.796838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.978 [2024-07-15 22:02:50.796845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.912 22:02:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.912 22:02:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:04.912 22:02:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:04.912 22:02:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:04.912 22:02:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:04.912 22:02:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.912 22:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:04.912 22:02:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22484 00:08:05.169 [2024-07-15 22:02:51.985679] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:05.169 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 22:02:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22484 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:08:05.169 request: 00:08:05.169 { 00:08:05.169 "method": "nvmf_create_subsystem", 00:08:05.169 "params": { 00:08:05.169 "nqn": "nqn.2016-06.io.spdk:cnode22484", 00:08:05.169 "tgt_name": "foobar" 00:08:05.169 } 00:08:05.169 } 00:08:05.169 Got JSON-RPC error response 00:08:05.169 GoRPCClient: error on JSON-RPC call' 00:08:05.169 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 22:02:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22484 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:08:05.169 
request: 00:08:05.169 { 00:08:05.169 "method": "nvmf_create_subsystem", 00:08:05.169 "params": { 00:08:05.169 "nqn": "nqn.2016-06.io.spdk:cnode22484", 00:08:05.169 "tgt_name": "foobar" 00:08:05.169 } 00:08:05.169 } 00:08:05.169 Got JSON-RPC error response 00:08:05.169 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:05.169 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:05.169 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18634 00:08:05.427 [2024-07-15 22:02:52.258016] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18634: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:05.427 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 22:02:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18634 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:08:05.427 request: 00:08:05.427 { 00:08:05.427 "method": "nvmf_create_subsystem", 00:08:05.427 "params": { 00:08:05.427 "nqn": "nqn.2016-06.io.spdk:cnode18634", 00:08:05.427 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:08:05.427 } 00:08:05.427 } 00:08:05.427 Got JSON-RPC error response 00:08:05.427 GoRPCClient: error on JSON-RPC call' 00:08:05.427 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 22:02:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18634 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:08:05.427 request: 00:08:05.427 { 00:08:05.427 "method": "nvmf_create_subsystem", 00:08:05.427 "params": { 00:08:05.427 "nqn": "nqn.2016-06.io.spdk:cnode18634", 00:08:05.427 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:08:05.427 } 00:08:05.427 } 00:08:05.427 Got JSON-RPC error response 00:08:05.427 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:05.427 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:05.427 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27020 00:08:05.686 [2024-07-15 22:02:52.554208] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27020: invalid model number 'SPDK_Controller' 00:08:05.686 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 22:02:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode27020], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:08:05.686 request: 00:08:05.686 { 00:08:05.686 "method": "nvmf_create_subsystem", 00:08:05.686 "params": { 00:08:05.686 "nqn": "nqn.2016-06.io.spdk:cnode27020", 00:08:05.686 "model_number": "SPDK_Controller\u001f" 00:08:05.686 } 00:08:05.686 } 00:08:05.686 Got JSON-RPC error response 00:08:05.686 GoRPCClient: error on JSON-RPC call' 00:08:05.686 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 22:02:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode27020], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:08:05.686 request: 00:08:05.686 { 00:08:05.686 "method": "nvmf_create_subsystem", 00:08:05.686 "params": { 00:08:05.686 "nqn": "nqn.2016-06.io.spdk:cnode27020", 00:08:05.686 "model_number": "SPDK_Controller\u001f" 00:08:05.686 } 00:08:05.686 } 00:08:05.686 Got JSON-RPC error response 00:08:05.686 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:05.686 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:05.686 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:05.686 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:05.686 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:05.686 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:05.686 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:05.687 22:02:52 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.687 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:05.945 22:02:52 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
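The long run of printf %x / echo -e / string+= steps here (continuing through the next few entries) is the gen_random_s helper from target/invalid.sh assembling a 21-character random string, '^0-r?[*ldhGr=<*<J!W!M' in this run, which the test then feeds to the target RPC as another deliberately invalid parameter. The sketch below captures the same idea; it is an approximation of the helper rather than a verbatim copy, so it will not necessarily reproduce that exact string even though the test pins RANDOM=0 for repeatability.

    # Build an N-character string from random byte values 32..127,
    # mirroring what gen_random_s does in the trace above.
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))              # same byte range as the chars array in the log
        for (( ll = 0; ll < length; ll++ )); do
            local hex
            printf -v hex '%x' "${chars[RANDOM % ${#chars[@]}]}"
            string+=$(echo -e "\x$hex")          # append the literal character for that byte
        done
        echo "$string"
    }

    RANDOM=0        # the test seeds RANDOM so the generated strings are repeatable
    gen_random_s 21

Together with the explicit cases just above (an unknown tgt_name, plus a serial number and a model number containing the control character \x1f), this exercises the parameter validation of nvmf_create_subsystem, which must answer with "Unable to find target", "Invalid SN" and "Invalid MN" respectively.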
00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]] 00:08:05.945 22:02:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '^0-r?[*ldhGr=<* /dev/null' 00:08:09.895 22:02:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.895 22:02:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:09.895 ************************************ 00:08:09.895 END TEST nvmf_invalid 00:08:09.895 ************************************ 00:08:09.895 00:08:09.895 real 0m6.699s 00:08:09.895 user 0m27.654s 00:08:09.895 sys 0m1.243s 00:08:09.895 22:02:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.895 22:02:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:09.895 22:02:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:09.895 22:02:56 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:09.895 22:02:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:09.895 22:02:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.895 22:02:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:09.895 ************************************ 00:08:09.895 START TEST nvmf_abort 00:08:09.895 ************************************ 00:08:09.895 22:02:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:10.156 * Looking for test storage... 
00:08:10.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:10.156 Cannot find device "nvmf_tgt_br" 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:10.156 Cannot find device "nvmf_tgt_br2" 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:10.156 Cannot find device "nvmf_tgt_br" 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:10.156 Cannot find device "nvmf_tgt_br2" 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:10.156 22:02:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:10.157 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:10.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.157 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:08:10.157 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:10.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.157 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:08:10.157 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:10.157 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:10.157 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:10.157 22:02:57 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:10.157 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:10.157 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:10.157 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:10.157 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:10.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:10.428 00:08:10.428 --- 10.0.0.2 ping statistics --- 00:08:10.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.428 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:10.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:10.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:08:10.428 00:08:10.428 --- 10.0.0.3 ping statistics --- 00:08:10.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.428 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:10.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:10.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:08:10.428 00:08:10.428 --- 10.0.0.1 ping statistics --- 00:08:10.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.428 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68283 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68283 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68283 ']' 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.428 22:02:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:10.428 [2024-07-15 22:02:57.344320] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:08:10.428 [2024-07-15 22:02:57.344464] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.686 [2024-07-15 22:02:57.487112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.686 [2024-07-15 22:02:57.549768] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.686 [2024-07-15 22:02:57.549857] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:10.686 [2024-07-15 22:02:57.549877] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.686 [2024-07-15 22:02:57.549891] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.686 [2024-07-15 22:02:57.549903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.686 [2024-07-15 22:02:57.550174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.686 [2024-07-15 22:02:57.550366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.686 [2024-07-15 22:02:57.550374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.621 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.621 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:11.621 22:02:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:11.621 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:11.621 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:11.621 22:02:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.621 22:02:58 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:11.621 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.621 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:11.621 [2024-07-15 22:02:58.431323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.621 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:11.622 Malloc0 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:11.622 Delay0 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.622 22:02:58 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:11.622 [2024-07-15 22:02:58.500073] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.622 22:02:58 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:11.880 [2024-07-15 22:02:58.669954] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:13.781 Initializing NVMe Controllers 00:08:13.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:13.781 controller IO queue size 128 less than required 00:08:13.781 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:13.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:13.781 Initialization complete. Launching workers. 
00:08:13.781 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28995 00:08:13.781 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29056, failed to submit 62 00:08:13.781 success 28999, unsuccess 57, failed 0 00:08:13.781 22:03:00 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:13.781 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.781 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:13.781 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.781 22:03:00 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:13.781 22:03:00 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:13.781 22:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.781 22:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.039 rmmod nvme_tcp 00:08:14.039 rmmod nvme_fabrics 00:08:14.039 rmmod nvme_keyring 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68283 ']' 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68283 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68283 ']' 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68283 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68283 00:08:14.039 killing process with pid 68283 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68283' 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68283 00:08:14.039 22:03:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68283 00:08:14.298 22:03:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.298 22:03:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.298 22:03:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.298 22:03:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.298 22:03:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.298 22:03:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.298 22:03:01 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.298 22:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.298 22:03:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:14.298 ************************************ 00:08:14.298 END TEST nvmf_abort 00:08:14.298 ************************************ 00:08:14.298 00:08:14.298 real 0m4.272s 00:08:14.298 user 0m12.208s 00:08:14.298 sys 0m1.080s 00:08:14.298 22:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.298 22:03:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:14.298 22:03:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:14.298 22:03:01 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:14.298 22:03:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:14.298 22:03:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.298 22:03:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:14.298 ************************************ 00:08:14.298 START TEST nvmf_ns_hotplug_stress 00:08:14.298 ************************************ 00:08:14.298 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:14.298 * Looking for test storage... 00:08:14.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.299 22:03:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.299 22:03:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:14.299 Cannot find device "nvmf_tgt_br" 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.299 Cannot find device "nvmf_tgt_br2" 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:08:14.299 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:14.558 Cannot find device "nvmf_tgt_br" 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:14.558 Cannot find device "nvmf_tgt_br2" 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.558 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.558 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:14.558 22:03:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:14.558 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:14.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:08:14.816 00:08:14.816 --- 10.0.0.2 ping statistics --- 00:08:14.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.816 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:14.816 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:14.816 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:08:14.816 00:08:14.816 --- 10.0.0.3 ping statistics --- 00:08:14.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.816 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:14.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:14.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:08:14.816 00:08:14.816 --- 10.0.0.1 ping statistics --- 00:08:14.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.816 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:14.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68544 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68544 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68544 ']' 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.816 22:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:14.816 [2024-07-15 22:03:01.614822] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:08:14.816 [2024-07-15 22:03:01.614919] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.816 [2024-07-15 22:03:01.751069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:15.074 [2024-07-15 22:03:01.824474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
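nvmfappstart above launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers. A stand-alone approximation is sketched below; the spdk_get_version probe and the 0.5 s poll interval are assumptions for illustration rather than the harness's exact retry logic, and the binary/socket paths are taken from the trace:

  # Hypothetical stand-alone equivalent of nvmfappstart -m 0xE
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the RPC socket until the target responds (spdk_get_version assumed as a cheap liveness probe)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done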
00:08:15.074 [2024-07-15 22:03:01.824805] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.074 [2024-07-15 22:03:01.825032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.074 [2024-07-15 22:03:01.825259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.074 [2024-07-15 22:03:01.825410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.074 [2024-07-15 22:03:01.825649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.074 [2024-07-15 22:03:01.825717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.074 [2024-07-15 22:03:01.825721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.007 22:03:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.007 22:03:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:16.007 22:03:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.007 22:03:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.007 22:03:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.007 22:03:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.007 22:03:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:16.007 22:03:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:16.265 [2024-07-15 22:03:03.066829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.265 22:03:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:16.861 22:03:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.861 [2024-07-15 22:03:03.763632] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.861 22:03:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.425 22:03:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:17.683 Malloc0 00:08:17.683 22:03:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:17.940 Delay0 00:08:17.941 22:03:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.198 22:03:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:18.456 NULL1 00:08:18.456 
22:03:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:19.022 22:03:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68686 00:08:19.022 22:03:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:19.022 22:03:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:19.022 22:03:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.394 Read completed with error (sct=0, sc=11) 00:08:20.394 22:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.650 22:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:20.650 22:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:20.907 true 00:08:20.907 22:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:20.907 22:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.471 22:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.036 22:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:22.036 22:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:22.293 true 00:08:22.293 22:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:22.293 22:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.666 22:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.666 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:08:23.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.958 22:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:23.958 22:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:24.216 true 00:08:24.216 22:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:24.216 22:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.780 22:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.398 22:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:25.398 22:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:25.656 true 00:08:25.656 22:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:25.656 22:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.026 22:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.283 22:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:27.283 22:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:27.541 true 00:08:27.541 22:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:27.541 22:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.474 22:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
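The bursts of suppressed read errors above and below are expected: spdk_nvme_perf keeps issuing reads against nqn.2016-06.io.spdk:cnode1 while namespaces are hot-added and removed underneath it. Each iteration of the loop in this trace (the @44-@50 markers) follows the pattern below, reconstructed from the rpc.py calls in the surrounding entries; PERF_PID and null_size are the script's own variables:

  # One hotplug-stress iteration as traced (rpc.py path from the log)
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  kill -0 "$PERF_PID"                                              # fail fast if the perf workload died
  $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add the Delay0 namespace
  null_size=$((null_size + 1))                                     # 1001, 1002, ... per iteration
  $rpc_py bdev_null_resize NULL1 "$null_size"                      # resize NULL1 while I/O is in flight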
00:08:28.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:28.732 22:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:28.732 22:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:28.989 true 00:08:28.989 22:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:28.989 22:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.923 22:03:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.923 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.181 22:03:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:30.181 22:03:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:30.439 true 00:08:30.439 22:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:30.439 22:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.005 22:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.263 22:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:31.263 22:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:31.521 true 00:08:31.521 22:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:31.521 22:03:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.897 22:03:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.897 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.154 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.154 22:03:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:33.154 22:03:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:33.721 true 00:08:33.721 22:03:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:33.721 22:03:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.286 22:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.544 22:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:34.544 22:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:34.802 true 00:08:34.802 22:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:34.802 22:03:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.744 22:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.745 22:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:35.745 22:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:36.002 true 00:08:36.002 22:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:36.002 22:03:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.940 22:03:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.198 22:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:37.198 22:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:37.455 true 00:08:37.456 22:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:37.456 22:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.713 22:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.278 22:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:38.278 22:03:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:38.278 true 00:08:38.278 22:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:38.278 22:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.844 22:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.844 22:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:38.844 22:03:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:39.410 true 00:08:39.410 22:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:39.410 22:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.667 22:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.924 22:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:39.924 22:03:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:40.181 true 00:08:40.439 22:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:40.439 22:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.439 22:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.005 22:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:41.005 22:03:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:41.264 true 00:08:41.264 22:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
68686 00:08:41.264 22:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.830 22:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.088 22:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:42.089 22:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:42.677 true 00:08:42.677 22:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:42.677 22:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.935 22:03:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.193 22:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:43.193 22:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:43.451 true 00:08:43.451 22:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:43.451 22:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.710 22:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.968 22:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:43.968 22:03:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:44.227 true 00:08:44.227 22:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:44.227 22:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.793 22:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.051 22:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:45.052 22:03:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:45.310 true 00:08:45.310 22:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:45.310 22:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.569 22:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.826 22:03:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:45.826 22:03:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:46.393 true 00:08:46.393 22:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:46.393 22:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.957 22:03:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.523 22:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:47.523 22:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:47.781 true 00:08:47.781 22:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:47.781 22:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.071 22:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.330 Initializing NVMe Controllers 00:08:49.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:49.330 Controller IO queue size 128, less than required. 00:08:49.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:49.330 Controller IO queue size 128, less than required. 00:08:49.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:49.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:49.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:49.330 Initialization complete. Launching workers. 
00:08:49.330 ======================================================== 00:08:49.330 Latency(us) 00:08:49.330 Device Information : IOPS MiB/s Average min max 00:08:49.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2153.77 1.05 34413.60 2812.58 1049900.06 00:08:49.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10707.63 5.23 11953.72 3561.25 697453.28 00:08:49.330 ======================================================== 00:08:49.330 Total : 12861.40 6.28 15714.85 2812.58 1049900.06 00:08:49.330 00:08:49.330 22:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:49.330 22:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:49.589 true 00:08:49.589 22:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68686 00:08:49.589 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68686) - No such process 00:08:49.589 22:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68686 00:08:49.589 22:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.848 22:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:50.414 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:50.414 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:50.414 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:50.414 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:50.414 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:50.414 null0 00:08:50.672 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:50.672 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:50.672 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:50.932 null1 00:08:50.932 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:50.932 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:50.932 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:51.190 null2 00:08:51.190 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:51.190 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:51.190 22:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:51.449 null3 00:08:51.449 22:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:51.449 22:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:51.449 22:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:51.708 null4 00:08:51.708 22:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:51.708 22:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:51.708 22:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:51.966 null5 00:08:51.966 22:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:51.966 22:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:51.966 22:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:52.224 null6 00:08:52.483 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:52.483 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:52.483 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:52.742 null7 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69603 69604 69606 69607 69610 69612 69614 69617 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:52.742 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:53.000 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:53.000 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:53.000 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:53.000 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.000 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:53.000 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:53.000 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.001 22:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
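The worker setup traced across the preceding records (ns_hotplug_stress.sh lines 14-18 and 58-66, going by the xtrace tags) amounts to roughly the sketch below: eight null bdevs, one backgrounded add/remove worker per bdev, then a wait on all of them. Again this is reconstructed from the trace rather than quoted from the script, so anything the trace does not show (for instance lines 61 and 65) is simply omitted.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    add_remove() {                                       # lines 14-18: one worker
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
        done
    }

    nthreads=8                                           # line 58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                 # lines 59-60: null0..null7, size 100, block size 4096
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do                 # lines 62-64: nsid i+1 gets bdev null<i>
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                                    # line 66: 69603 69604 ... 69617 in this run

The interleaved @16/@17/@18 records that follow are those eight workers racing each other against nqn.2016-06.io.spdk:cnode1.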
00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.258 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:53.516 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:53.516 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.516 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.516 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:53.516 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.516 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.516 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:53.516 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:53.516 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.516 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:53.774 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:53.774 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:53.774 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:53.774 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:53.774 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:54.031 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.031 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.031 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.031 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.031 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.031 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:54.031 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.031 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.031 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:54.031 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.031 22:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:54.289 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:54.289 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.289 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.289 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:54.289 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.289 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.289 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:54.289 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:54.289 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:54.548 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.548 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.548 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:54.548 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:54.548 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.548 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.548 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:54.548 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.548 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.548 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:54.548 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.548 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.805 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:54.805 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:54.805 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:54.805 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:54.805 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:54.805 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:54.805 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.064 22:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:55.322 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:55.322 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:55.322 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:55.580 22:03:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.580 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.838 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:56.096 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:56.096 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:56.096 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.096 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.096 22:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.354 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:56.612 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.612 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.612 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:56.612 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:56.612 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:56.612 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:56.612 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:56.869 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:56.869 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:56.869 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:56.869 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:08:57.125 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.125 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.125 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:57.125 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:57.125 22:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:57.125 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.125 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.125 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.383 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:57.640 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:57.640 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.640 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:57.640 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:57.640 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:57.640 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:57.897 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:57.897 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:57.897 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:57.897 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:57.897 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:57.897 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:57.897 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:58.155 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.155 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.155 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:58.155 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.155 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.155 22:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:58.155 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.155 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.155 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:58.155 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.155 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.155 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:58.412 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.412 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.412 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:58.412 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.412 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.412 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:58.412 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.412 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.412 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:58.669 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.669 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:58.669 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:58.669 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:58.669 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:58.669 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:58.927 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:58.927 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:58.927 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.927 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.927 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:58.927 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:58.927 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:58.927 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:59.184 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.184 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.184 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:59.184 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.184 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.184 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:59.184 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.184 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.184 22:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:59.184 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.184 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.184 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:59.184 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:59.483 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.483 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.483 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:59.483 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.483 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.483 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:59.483 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.483 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:59.742 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:59.742 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:59.742 
22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:59.742 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:59.742 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:59.742 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:59.742 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:59.742 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.000 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.001 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:00.258 22:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:00.258 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.258 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.258 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:00.258 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.258 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.258 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:00.258 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.516 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:00.516 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:00.516 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:00.516 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:00.516 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.516 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.516 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:00.774 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:00.774 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.774 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.774 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.774 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:00.774 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.774 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:00.774 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:00.774 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.032 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.032 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.032 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.032 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.032 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.032 22:03:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.032 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:01.032 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.032 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.032 22:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.290 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.290 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.548 rmmod nvme_tcp 00:09:01.548 rmmod nvme_fabrics 00:09:01.548 rmmod nvme_keyring 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68544 ']' 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68544 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 68544 ']' 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68544 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68544 00:09:01.548 killing process with pid 68544 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68544' 00:09:01.548 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68544 00:09:01.549 22:03:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68544 00:09:01.808 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.808 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.808 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.808 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.808 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.808 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.808 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.808 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.808 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:01.808 00:09:01.808 real 0m47.481s 00:09:01.808 user 3m58.715s 00:09:01.808 sys 0m15.057s 00:09:01.808 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.808 22:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.808 ************************************ 00:09:01.808 END TEST nvmf_ns_hotplug_stress 00:09:01.808 ************************************ 00:09:01.808 22:03:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:01.808 22:03:48 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:01.808 22:03:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.808 22:03:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.808 22:03:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.808 ************************************ 00:09:01.808 START TEST nvmf_connect_stress 00:09:01.808 ************************************ 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:01.808 * Looking for test storage... 
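The ns_hotplug_stress run that finishes above drives the target by repeatedly attaching and detaching namespaces on nqn.2016-06.io.spdk:cnode1 through rpc.py while the subsystem stays live. A minimal standalone sketch of that add/remove loop, reconstructed only from the commands visible in the trace (the real script chooses namespace IDs differently and runs an I/O workload alongside, which is not shown here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    for ((i = 0; i < 10; i++)); do
        # attach a null bdev to a namespace slot; in the trace bdev nullN backs namespace N+1
        nsid=$((RANDOM % 8 + 1))
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "null$((nsid - 1))" || true
        # detach a (possibly different, possibly absent) namespace
        "$rpc" nvmf_subsystem_remove_ns "$subsys" $((RANDOM % 8 + 1)) || true
    done

The '|| true' guards are an assumption of this sketch: with random IDs some removals will target namespaces that are not currently attached, and the stress test only cares that the target survives the churn.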
00:09:01.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:01.808 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:02.067 Cannot find device "nvmf_tgt_br" 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:02.067 Cannot find device "nvmf_tgt_br2" 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:02.067 Cannot find device "nvmf_tgt_br" 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:02.067 Cannot find device "nvmf_tgt_br2" 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:09:02.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:02.067 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:02.067 22:03:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:02.067 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:02.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:02.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:09:02.326 00:09:02.326 --- 10.0.0.2 ping statistics --- 00:09:02.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.326 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:02.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:02.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:09:02.326 00:09:02.326 --- 10.0.0.3 ping statistics --- 00:09:02.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.326 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:02.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:02.326 00:09:02.326 --- 10.0.0.1 ping statistics --- 00:09:02.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.326 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=70954 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 70954 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 70954 ']' 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
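The block of ip/iptables commands and ping checks above is nvmf_veth_init from nvmf/common.sh: it builds a private test network in which the target runs inside the nvmf_tgt_ns_spdk namespace and the initiator reaches it over bridged veth pairs. A condensed sketch using only commands that appear in the trace (the second target interface at 10.0.0.3 is created the same way and omitted here; root privileges assumed):

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to a bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns "$NS"

    # 10.0.0.1 = initiator side, 10.0.0.2 = target side inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up

    # bridge the host-side ends together and open the NVMe/TCP port
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity-check both directions before starting nvmf_tgt
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

The 'Cannot find device' and 'Cannot open network namespace' messages earlier in the trace are the teardown of a previous run's network failing harmlessly before this setup recreates it.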
00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.326 22:03:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:02.326 [2024-07-15 22:03:49.208430] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:09:02.326 [2024-07-15 22:03:49.208561] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.584 [2024-07-15 22:03:49.352256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:02.584 [2024-07-15 22:03:49.436726] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.584 [2024-07-15 22:03:49.436821] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.584 [2024-07-15 22:03:49.436839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.584 [2024-07-15 22:03:49.436853] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.584 [2024-07-15 22:03:49.436865] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.584 [2024-07-15 22:03:49.436993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.584 [2024-07-15 22:03:49.438058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.584 [2024-07-15 22:03:49.438098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.518 [2024-07-15 22:03:50.238239] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.518 [2024-07-15 22:03:50.263535] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.518 NULL1 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=71012 00:09:03.518 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.519 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:03.777 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.777 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:03.777 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:03.777 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.777 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.344 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.344 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:04.344 22:03:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:04.344 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.344 22:03:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.603 22:03:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:09:04.603 22:03:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:04.603 22:03:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:04.603 22:03:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.603 22:03:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.860 22:03:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.860 22:03:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:04.860 22:03:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:04.860 22:03:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.860 22:03:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.116 22:03:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.116 22:03:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:05.116 22:03:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.116 22:03:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.116 22:03:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.373 22:03:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.373 22:03:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:05.373 22:03:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.373 22:03:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.373 22:03:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:05.941 22:03:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.941 22:03:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:05.941 22:03:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.941 22:03:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.941 22:03:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.199 22:03:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.199 22:03:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:06.199 22:03:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.199 22:03:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.199 22:03:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.463 22:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.463 22:03:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:06.463 22:03:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.463 22:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.463 22:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.725 22:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.725 22:03:53 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 71012 00:09:06.725 22:03:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.725 22:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.725 22:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:06.983 22:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.983 22:03:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:06.983 22:03:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.983 22:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.983 22:03:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.547 22:03:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.547 22:03:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:07.547 22:03:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.547 22:03:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.547 22:03:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:07.805 22:03:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.805 22:03:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:07.805 22:03:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.805 22:03:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.805 22:03:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.065 22:03:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.065 22:03:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:08.065 22:03:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.065 22:03:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.065 22:03:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.323 22:03:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.323 22:03:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:08.323 22:03:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.323 22:03:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.323 22:03:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:08.580 22:03:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.580 22:03:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:08.580 22:03:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.580 22:03:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.580 22:03:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.144 22:03:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.144 22:03:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:09.144 22:03:55 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.144 22:03:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.144 22:03:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.402 22:03:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.402 22:03:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:09.402 22:03:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.402 22:03:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.402 22:03:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.660 22:03:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.660 22:03:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:09.660 22:03:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.660 22:03:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.660 22:03:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:09.918 22:03:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.918 22:03:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:09.918 22:03:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.918 22:03:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.918 22:03:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.176 22:03:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.176 22:03:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:10.176 22:03:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.176 22:03:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.176 22:03:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.741 22:03:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.741 22:03:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:10.741 22:03:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.741 22:03:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.741 22:03:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:10.998 22:03:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.998 22:03:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:10.998 22:03:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.998 22:03:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.998 22:03:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.255 22:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.255 22:03:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:11.255 22:03:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:09:11.255 22:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.255 22:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.514 22:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.514 22:03:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:11.514 22:03:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.514 22:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.514 22:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:11.772 22:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.772 22:03:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:11.772 22:03:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.772 22:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.772 22:03:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.364 22:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.364 22:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:12.364 22:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.364 22:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.364 22:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.622 22:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.622 22:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:12.622 22:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.622 22:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.622 22:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.881 22:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.881 22:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:12.881 22:03:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.881 22:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.881 22:03:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.138 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.138 22:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:13.138 22:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.138 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.138 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.397 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.397 22:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:13.397 22:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.397 22:04:00 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.397 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:13.654 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:09:13.910 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71012) - No such process 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 71012 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:13.910 rmmod nvme_tcp 00:09:13.910 rmmod nvme_fabrics 00:09:13.910 rmmod nvme_keyring 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 70954 ']' 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 70954 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 70954 ']' 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 70954 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:09:13.910 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.911 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70954 00:09:13.911 killing process with pid 70954 00:09:13.911 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:13.911 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:13.911 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70954' 00:09:13.911 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 70954 00:09:13.911 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 70954 00:09:14.167 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:14.167 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:14.167 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:09:14.167 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:14.167 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:14.167 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.167 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.167 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.167 22:04:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:14.167 ************************************ 00:09:14.167 END TEST nvmf_connect_stress 00:09:14.167 ************************************ 00:09:14.167 00:09:14.167 real 0m12.341s 00:09:14.167 user 0m40.486s 00:09:14.167 sys 0m3.602s 00:09:14.167 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.167 22:04:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:14.167 22:04:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:14.167 22:04:01 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:14.167 22:04:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:14.167 22:04:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.167 22:04:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:14.167 ************************************ 00:09:14.167 START TEST nvmf_fused_ordering 00:09:14.167 ************************************ 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:14.167 * Looking for test storage... 
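The connect_stress run summarized above follows a simple pattern: provision a TCP subsystem over RPC, point the connect_stress client at it for 10 seconds, and keep issuing RPCs to the target while checking that the client is still alive. A compressed sketch of that flow, assuming nvmf_tgt is already up and listening on /var/tmp/spdk.sock; the provisioning commands are taken verbatim from the trace, while the RPC inside the monitoring loop is a stand-in (the real script replays a pre-generated rpc.txt batch whose contents are not visible in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512

    # run the connect/disconnect stress client against the new listener for 10 seconds
    /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    perf_pid=$!

    # keep the target busy with RPCs for as long as the client keeps running
    while kill -0 "$perf_pid" 2>/dev/null; do
        "$rpc" bdev_get_bdevs > /dev/null
    done
    wait "$perf_pid"

The 'kill: (71012) - No such process' message in the trace is the expected exit condition: connect_stress finished its 10 seconds, so the final kill -0 probe fails and the monitoring loop ends.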
00:09:14.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.167 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:14.168 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.168 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:14.424 Cannot find device "nvmf_tgt_br" 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:14.424 Cannot find device "nvmf_tgt_br2" 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:14.424 Cannot find device "nvmf_tgt_br" 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:14.424 Cannot find device "nvmf_tgt_br2" 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:09:14.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:14.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:14.424 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:14.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:09:14.682 00:09:14.682 --- 10.0.0.2 ping statistics --- 00:09:14.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.682 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:14.682 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:14.682 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:09:14.682 00:09:14.682 --- 10.0.0.3 ping statistics --- 00:09:14.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.682 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:14.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:14.682 00:09:14.682 --- 10.0.0.1 ping statistics --- 00:09:14.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.682 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71332 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:14.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71332 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71332 ']' 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
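
The nvmf_veth_init trace above builds the test network before the target starts: one veth pair for the initiator (nvmf_init_if / nvmf_init_br) stays in the root namespace, two target-side pairs (nvmf_tgt_if / nvmf_tgt_br and nvmf_tgt_if2 / nvmf_tgt_br2) have their "if" ends moved into the nvmf_tgt_ns_spdk namespace, the host-side ends are joined on the nvmf_br bridge, and single pings to 10.0.0.2, 10.0.0.3 and 10.0.0.1 confirm reachability in both directions. The earlier "Cannot find device" and "Cannot open network namespace" messages are the expected output of the cleanup pass that removes leftovers from a previous run. A minimal standalone sketch of the same topology follows; device, namespace and address names are copied from the trace, and it assumes a root shell on a host where none of these devices exist yet.

  # create the target network namespace and three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # move the target-side ends into the namespace and assign addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and bridge the host-side ends together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP traffic on the default port and verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The FORWARD rule keeps bridged traffic flowing on hosts where br_netfilter makes bridged frames traverse iptables; on hosts without that sysctl enabled it is simply inert.
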
00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.682 22:04:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:14.682 [2024-07-15 22:04:01.554855] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:09:14.682 [2024-07-15 22:04:01.555226] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.940 [2024-07-15 22:04:01.695454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.940 [2024-07-15 22:04:01.754730] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.940 [2024-07-15 22:04:01.755005] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.940 [2024-07-15 22:04:01.755158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.940 [2024-07-15 22:04:01.755295] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.940 [2024-07-15 22:04:01.755370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.940 [2024-07-15 22:04:01.755526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.873 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.873 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:15.873 22:04:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.873 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.873 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:15.873 22:04:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.873 22:04:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.873 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.873 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:15.874 [2024-07-15 22:04:02.636278] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
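
Between the connectivity checks and the fused-ordering run, the target is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and then configured through the rpc_cmd calls traced above. rpc_cmd in these scripts is a thin wrapper around scripts/rpc.py talking to the path-based UNIX socket /var/tmp/spdk.sock, which is why the RPCs can be issued from the root namespace while the target lives in nvmf_tgt_ns_spdk. A sketch of the equivalent direct invocations, assuming the repository path from the log; the until-loop is a stand-in for the test's waitforlisten helper.

  SPDK=/home/vagrant/spdk_repo/spdk

  # start the target in the test namespace: shm id 0, all tracepoint groups, core mask 0x2
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # poll the RPC socket until the target answers (waitforlisten does this with more care)
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # TCP transport with the options traced above, then the subsystem and its listener
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On nvmf_create_subsystem, -a allows any host NQN to connect, -s sets the serial number and -m 10 caps the namespace count; the listener address 10.0.0.2 is the nvmf_tgt_if address inside the namespace, matching the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice that follows.
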
00:09:15.874 [2024-07-15 22:04:02.652392] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:15.874 NULL1 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.874 22:04:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:15.874 [2024-07-15 22:04:02.707884] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
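
The last three RPCs above give the subsystem something to serve: bdev_null_create NULL1 1000 512 creates a 1000 MB null bdev with 512-byte blocks, bdev_wait_for_examine waits for bdev examination to finish, and nvmf_subsystem_add_ns attaches NULL1 as a namespace, which the initiator reports just below as "Namespace ID: 1 size: 1GB". A sketch of those steps plus the initiator invocation, with the path and transport ID string copied verbatim from the trace:

  SPDK=/home/vagrant/spdk_repo/spdk

  # back the subsystem with a 1000 MB, 512-byte-block null bdev and expose it as a namespace
  "$SPDK/scripts/rpc.py" bdev_null_create NULL1 1000 512
  "$SPDK/scripts/rpc.py" bdev_wait_for_examine
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # run the fused-ordering initiator; -r takes an SPDK transport ID string selecting
  # TCP/IPv4, the listener address and port, and the subsystem NQN to connect to
  "$SPDK/test/nvme/fused_ordering/fused_ordering" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) lines that follow are the binary's per-command progress output; in this run they count from 0 through 1023, with the interleaved timestamps showing the sweep completing between 00:09:16 and 00:09:18.
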
00:09:15.874 [2024-07-15 22:04:02.707966] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71382 ] 00:09:16.438 Attached to nqn.2016-06.io.spdk:cnode1 00:09:16.438 Namespace ID: 1 size: 1GB 00:09:16.438 fused_ordering(0) 00:09:16.438 fused_ordering(1) 00:09:16.438 fused_ordering(2) 00:09:16.438 fused_ordering(3) 00:09:16.438 fused_ordering(4) 00:09:16.438 fused_ordering(5) 00:09:16.438 fused_ordering(6) 00:09:16.438 fused_ordering(7) 00:09:16.438 fused_ordering(8) 00:09:16.438 fused_ordering(9) 00:09:16.438 fused_ordering(10) 00:09:16.438 fused_ordering(11) 00:09:16.438 fused_ordering(12) 00:09:16.438 fused_ordering(13) 00:09:16.438 fused_ordering(14) 00:09:16.438 fused_ordering(15) 00:09:16.438 fused_ordering(16) 00:09:16.438 fused_ordering(17) 00:09:16.438 fused_ordering(18) 00:09:16.438 fused_ordering(19) 00:09:16.438 fused_ordering(20) 00:09:16.438 fused_ordering(21) 00:09:16.438 fused_ordering(22) 00:09:16.438 fused_ordering(23) 00:09:16.438 fused_ordering(24) 00:09:16.438 fused_ordering(25) 00:09:16.438 fused_ordering(26) 00:09:16.438 fused_ordering(27) 00:09:16.438 fused_ordering(28) 00:09:16.438 fused_ordering(29) 00:09:16.438 fused_ordering(30) 00:09:16.438 fused_ordering(31) 00:09:16.438 fused_ordering(32) 00:09:16.438 fused_ordering(33) 00:09:16.438 fused_ordering(34) 00:09:16.438 fused_ordering(35) 00:09:16.438 fused_ordering(36) 00:09:16.438 fused_ordering(37) 00:09:16.438 fused_ordering(38) 00:09:16.438 fused_ordering(39) 00:09:16.438 fused_ordering(40) 00:09:16.438 fused_ordering(41) 00:09:16.438 fused_ordering(42) 00:09:16.438 fused_ordering(43) 00:09:16.438 fused_ordering(44) 00:09:16.438 fused_ordering(45) 00:09:16.438 fused_ordering(46) 00:09:16.438 fused_ordering(47) 00:09:16.438 fused_ordering(48) 00:09:16.438 fused_ordering(49) 00:09:16.438 fused_ordering(50) 00:09:16.438 fused_ordering(51) 00:09:16.438 fused_ordering(52) 00:09:16.438 fused_ordering(53) 00:09:16.438 fused_ordering(54) 00:09:16.438 fused_ordering(55) 00:09:16.438 fused_ordering(56) 00:09:16.438 fused_ordering(57) 00:09:16.438 fused_ordering(58) 00:09:16.438 fused_ordering(59) 00:09:16.438 fused_ordering(60) 00:09:16.438 fused_ordering(61) 00:09:16.438 fused_ordering(62) 00:09:16.438 fused_ordering(63) 00:09:16.438 fused_ordering(64) 00:09:16.438 fused_ordering(65) 00:09:16.438 fused_ordering(66) 00:09:16.438 fused_ordering(67) 00:09:16.438 fused_ordering(68) 00:09:16.438 fused_ordering(69) 00:09:16.438 fused_ordering(70) 00:09:16.438 fused_ordering(71) 00:09:16.438 fused_ordering(72) 00:09:16.438 fused_ordering(73) 00:09:16.438 fused_ordering(74) 00:09:16.438 fused_ordering(75) 00:09:16.438 fused_ordering(76) 00:09:16.438 fused_ordering(77) 00:09:16.438 fused_ordering(78) 00:09:16.438 fused_ordering(79) 00:09:16.438 fused_ordering(80) 00:09:16.438 fused_ordering(81) 00:09:16.438 fused_ordering(82) 00:09:16.438 fused_ordering(83) 00:09:16.438 fused_ordering(84) 00:09:16.438 fused_ordering(85) 00:09:16.438 fused_ordering(86) 00:09:16.438 fused_ordering(87) 00:09:16.438 fused_ordering(88) 00:09:16.438 fused_ordering(89) 00:09:16.438 fused_ordering(90) 00:09:16.438 fused_ordering(91) 00:09:16.438 fused_ordering(92) 00:09:16.438 fused_ordering(93) 00:09:16.438 fused_ordering(94) 00:09:16.438 fused_ordering(95) 00:09:16.438 fused_ordering(96) 00:09:16.438 fused_ordering(97) 00:09:16.438 
fused_ordering(98) 00:09:16.438 fused_ordering(99) 00:09:16.438 fused_ordering(100) 00:09:16.438 fused_ordering(101) 00:09:16.438 fused_ordering(102) 00:09:16.438 fused_ordering(103) 00:09:16.438 fused_ordering(104) 00:09:16.439 fused_ordering(105) 00:09:16.439 fused_ordering(106) 00:09:16.439 fused_ordering(107) 00:09:16.439 fused_ordering(108) 00:09:16.439 fused_ordering(109) 00:09:16.439 fused_ordering(110) 00:09:16.439 fused_ordering(111) 00:09:16.439 fused_ordering(112) 00:09:16.439 fused_ordering(113) 00:09:16.439 fused_ordering(114) 00:09:16.439 fused_ordering(115) 00:09:16.439 fused_ordering(116) 00:09:16.439 fused_ordering(117) 00:09:16.439 fused_ordering(118) 00:09:16.439 fused_ordering(119) 00:09:16.439 fused_ordering(120) 00:09:16.439 fused_ordering(121) 00:09:16.439 fused_ordering(122) 00:09:16.439 fused_ordering(123) 00:09:16.439 fused_ordering(124) 00:09:16.439 fused_ordering(125) 00:09:16.439 fused_ordering(126) 00:09:16.439 fused_ordering(127) 00:09:16.439 fused_ordering(128) 00:09:16.439 fused_ordering(129) 00:09:16.439 fused_ordering(130) 00:09:16.439 fused_ordering(131) 00:09:16.439 fused_ordering(132) 00:09:16.439 fused_ordering(133) 00:09:16.439 fused_ordering(134) 00:09:16.439 fused_ordering(135) 00:09:16.439 fused_ordering(136) 00:09:16.439 fused_ordering(137) 00:09:16.439 fused_ordering(138) 00:09:16.439 fused_ordering(139) 00:09:16.439 fused_ordering(140) 00:09:16.439 fused_ordering(141) 00:09:16.439 fused_ordering(142) 00:09:16.439 fused_ordering(143) 00:09:16.439 fused_ordering(144) 00:09:16.439 fused_ordering(145) 00:09:16.439 fused_ordering(146) 00:09:16.439 fused_ordering(147) 00:09:16.439 fused_ordering(148) 00:09:16.439 fused_ordering(149) 00:09:16.439 fused_ordering(150) 00:09:16.439 fused_ordering(151) 00:09:16.439 fused_ordering(152) 00:09:16.439 fused_ordering(153) 00:09:16.439 fused_ordering(154) 00:09:16.439 fused_ordering(155) 00:09:16.439 fused_ordering(156) 00:09:16.439 fused_ordering(157) 00:09:16.439 fused_ordering(158) 00:09:16.439 fused_ordering(159) 00:09:16.439 fused_ordering(160) 00:09:16.439 fused_ordering(161) 00:09:16.439 fused_ordering(162) 00:09:16.439 fused_ordering(163) 00:09:16.439 fused_ordering(164) 00:09:16.439 fused_ordering(165) 00:09:16.439 fused_ordering(166) 00:09:16.439 fused_ordering(167) 00:09:16.439 fused_ordering(168) 00:09:16.439 fused_ordering(169) 00:09:16.439 fused_ordering(170) 00:09:16.439 fused_ordering(171) 00:09:16.439 fused_ordering(172) 00:09:16.439 fused_ordering(173) 00:09:16.439 fused_ordering(174) 00:09:16.439 fused_ordering(175) 00:09:16.439 fused_ordering(176) 00:09:16.439 fused_ordering(177) 00:09:16.439 fused_ordering(178) 00:09:16.439 fused_ordering(179) 00:09:16.439 fused_ordering(180) 00:09:16.439 fused_ordering(181) 00:09:16.439 fused_ordering(182) 00:09:16.439 fused_ordering(183) 00:09:16.439 fused_ordering(184) 00:09:16.439 fused_ordering(185) 00:09:16.439 fused_ordering(186) 00:09:16.439 fused_ordering(187) 00:09:16.439 fused_ordering(188) 00:09:16.439 fused_ordering(189) 00:09:16.439 fused_ordering(190) 00:09:16.439 fused_ordering(191) 00:09:16.439 fused_ordering(192) 00:09:16.439 fused_ordering(193) 00:09:16.439 fused_ordering(194) 00:09:16.439 fused_ordering(195) 00:09:16.439 fused_ordering(196) 00:09:16.439 fused_ordering(197) 00:09:16.439 fused_ordering(198) 00:09:16.439 fused_ordering(199) 00:09:16.439 fused_ordering(200) 00:09:16.439 fused_ordering(201) 00:09:16.439 fused_ordering(202) 00:09:16.439 fused_ordering(203) 00:09:16.439 fused_ordering(204) 00:09:16.439 fused_ordering(205) 
00:09:16.696 fused_ordering(206) 00:09:16.696 fused_ordering(207) 00:09:16.696 fused_ordering(208) 00:09:16.696 fused_ordering(209) 00:09:16.696 fused_ordering(210) 00:09:16.696 fused_ordering(211) 00:09:16.696 fused_ordering(212) 00:09:16.696 fused_ordering(213) 00:09:16.696 fused_ordering(214) 00:09:16.696 fused_ordering(215) 00:09:16.696 fused_ordering(216) 00:09:16.696 fused_ordering(217) 00:09:16.696 fused_ordering(218) 00:09:16.696 fused_ordering(219) 00:09:16.696 fused_ordering(220) 00:09:16.696 fused_ordering(221) 00:09:16.696 fused_ordering(222) 00:09:16.696 fused_ordering(223) 00:09:16.696 fused_ordering(224) 00:09:16.696 fused_ordering(225) 00:09:16.696 fused_ordering(226) 00:09:16.696 fused_ordering(227) 00:09:16.696 fused_ordering(228) 00:09:16.696 fused_ordering(229) 00:09:16.696 fused_ordering(230) 00:09:16.696 fused_ordering(231) 00:09:16.696 fused_ordering(232) 00:09:16.696 fused_ordering(233) 00:09:16.696 fused_ordering(234) 00:09:16.696 fused_ordering(235) 00:09:16.696 fused_ordering(236) 00:09:16.696 fused_ordering(237) 00:09:16.696 fused_ordering(238) 00:09:16.696 fused_ordering(239) 00:09:16.696 fused_ordering(240) 00:09:16.696 fused_ordering(241) 00:09:16.696 fused_ordering(242) 00:09:16.696 fused_ordering(243) 00:09:16.696 fused_ordering(244) 00:09:16.696 fused_ordering(245) 00:09:16.696 fused_ordering(246) 00:09:16.696 fused_ordering(247) 00:09:16.696 fused_ordering(248) 00:09:16.696 fused_ordering(249) 00:09:16.696 fused_ordering(250) 00:09:16.696 fused_ordering(251) 00:09:16.696 fused_ordering(252) 00:09:16.696 fused_ordering(253) 00:09:16.696 fused_ordering(254) 00:09:16.697 fused_ordering(255) 00:09:16.697 fused_ordering(256) 00:09:16.697 fused_ordering(257) 00:09:16.697 fused_ordering(258) 00:09:16.697 fused_ordering(259) 00:09:16.697 fused_ordering(260) 00:09:16.697 fused_ordering(261) 00:09:16.697 fused_ordering(262) 00:09:16.697 fused_ordering(263) 00:09:16.697 fused_ordering(264) 00:09:16.697 fused_ordering(265) 00:09:16.697 fused_ordering(266) 00:09:16.697 fused_ordering(267) 00:09:16.697 fused_ordering(268) 00:09:16.697 fused_ordering(269) 00:09:16.697 fused_ordering(270) 00:09:16.697 fused_ordering(271) 00:09:16.697 fused_ordering(272) 00:09:16.697 fused_ordering(273) 00:09:16.697 fused_ordering(274) 00:09:16.697 fused_ordering(275) 00:09:16.697 fused_ordering(276) 00:09:16.697 fused_ordering(277) 00:09:16.697 fused_ordering(278) 00:09:16.697 fused_ordering(279) 00:09:16.697 fused_ordering(280) 00:09:16.697 fused_ordering(281) 00:09:16.697 fused_ordering(282) 00:09:16.697 fused_ordering(283) 00:09:16.697 fused_ordering(284) 00:09:16.697 fused_ordering(285) 00:09:16.697 fused_ordering(286) 00:09:16.697 fused_ordering(287) 00:09:16.697 fused_ordering(288) 00:09:16.697 fused_ordering(289) 00:09:16.697 fused_ordering(290) 00:09:16.697 fused_ordering(291) 00:09:16.697 fused_ordering(292) 00:09:16.697 fused_ordering(293) 00:09:16.697 fused_ordering(294) 00:09:16.697 fused_ordering(295) 00:09:16.697 fused_ordering(296) 00:09:16.697 fused_ordering(297) 00:09:16.697 fused_ordering(298) 00:09:16.697 fused_ordering(299) 00:09:16.697 fused_ordering(300) 00:09:16.697 fused_ordering(301) 00:09:16.697 fused_ordering(302) 00:09:16.697 fused_ordering(303) 00:09:16.697 fused_ordering(304) 00:09:16.697 fused_ordering(305) 00:09:16.697 fused_ordering(306) 00:09:16.697 fused_ordering(307) 00:09:16.697 fused_ordering(308) 00:09:16.697 fused_ordering(309) 00:09:16.697 fused_ordering(310) 00:09:16.697 fused_ordering(311) 00:09:16.697 fused_ordering(312) 00:09:16.697 
fused_ordering(313) 00:09:16.697 fused_ordering(314) 00:09:16.697 fused_ordering(315) 00:09:16.697 fused_ordering(316) 00:09:16.697 fused_ordering(317) 00:09:16.697 fused_ordering(318) 00:09:16.697 fused_ordering(319) 00:09:16.697 fused_ordering(320) 00:09:16.697 fused_ordering(321) 00:09:16.697 fused_ordering(322) 00:09:16.697 fused_ordering(323) 00:09:16.697 fused_ordering(324) 00:09:16.697 fused_ordering(325) 00:09:16.697 fused_ordering(326) 00:09:16.697 fused_ordering(327) 00:09:16.697 fused_ordering(328) 00:09:16.697 fused_ordering(329) 00:09:16.697 fused_ordering(330) 00:09:16.697 fused_ordering(331) 00:09:16.697 fused_ordering(332) 00:09:16.697 fused_ordering(333) 00:09:16.697 fused_ordering(334) 00:09:16.697 fused_ordering(335) 00:09:16.697 fused_ordering(336) 00:09:16.697 fused_ordering(337) 00:09:16.697 fused_ordering(338) 00:09:16.697 fused_ordering(339) 00:09:16.697 fused_ordering(340) 00:09:16.697 fused_ordering(341) 00:09:16.697 fused_ordering(342) 00:09:16.697 fused_ordering(343) 00:09:16.697 fused_ordering(344) 00:09:16.697 fused_ordering(345) 00:09:16.697 fused_ordering(346) 00:09:16.697 fused_ordering(347) 00:09:16.697 fused_ordering(348) 00:09:16.697 fused_ordering(349) 00:09:16.697 fused_ordering(350) 00:09:16.697 fused_ordering(351) 00:09:16.697 fused_ordering(352) 00:09:16.697 fused_ordering(353) 00:09:16.697 fused_ordering(354) 00:09:16.697 fused_ordering(355) 00:09:16.697 fused_ordering(356) 00:09:16.697 fused_ordering(357) 00:09:16.697 fused_ordering(358) 00:09:16.697 fused_ordering(359) 00:09:16.697 fused_ordering(360) 00:09:16.697 fused_ordering(361) 00:09:16.697 fused_ordering(362) 00:09:16.697 fused_ordering(363) 00:09:16.697 fused_ordering(364) 00:09:16.697 fused_ordering(365) 00:09:16.697 fused_ordering(366) 00:09:16.697 fused_ordering(367) 00:09:16.697 fused_ordering(368) 00:09:16.697 fused_ordering(369) 00:09:16.697 fused_ordering(370) 00:09:16.697 fused_ordering(371) 00:09:16.697 fused_ordering(372) 00:09:16.697 fused_ordering(373) 00:09:16.697 fused_ordering(374) 00:09:16.697 fused_ordering(375) 00:09:16.697 fused_ordering(376) 00:09:16.697 fused_ordering(377) 00:09:16.697 fused_ordering(378) 00:09:16.697 fused_ordering(379) 00:09:16.697 fused_ordering(380) 00:09:16.697 fused_ordering(381) 00:09:16.697 fused_ordering(382) 00:09:16.697 fused_ordering(383) 00:09:16.697 fused_ordering(384) 00:09:16.697 fused_ordering(385) 00:09:16.697 fused_ordering(386) 00:09:16.697 fused_ordering(387) 00:09:16.697 fused_ordering(388) 00:09:16.697 fused_ordering(389) 00:09:16.697 fused_ordering(390) 00:09:16.697 fused_ordering(391) 00:09:16.697 fused_ordering(392) 00:09:16.697 fused_ordering(393) 00:09:16.697 fused_ordering(394) 00:09:16.697 fused_ordering(395) 00:09:16.697 fused_ordering(396) 00:09:16.697 fused_ordering(397) 00:09:16.697 fused_ordering(398) 00:09:16.697 fused_ordering(399) 00:09:16.697 fused_ordering(400) 00:09:16.697 fused_ordering(401) 00:09:16.697 fused_ordering(402) 00:09:16.697 fused_ordering(403) 00:09:16.697 fused_ordering(404) 00:09:16.697 fused_ordering(405) 00:09:16.697 fused_ordering(406) 00:09:16.697 fused_ordering(407) 00:09:16.697 fused_ordering(408) 00:09:16.697 fused_ordering(409) 00:09:16.697 fused_ordering(410) 00:09:17.262 fused_ordering(411) 00:09:17.262 fused_ordering(412) 00:09:17.262 fused_ordering(413) 00:09:17.262 fused_ordering(414) 00:09:17.262 fused_ordering(415) 00:09:17.262 fused_ordering(416) 00:09:17.262 fused_ordering(417) 00:09:17.262 fused_ordering(418) 00:09:17.262 fused_ordering(419) 00:09:17.262 fused_ordering(420) 
00:09:17.262 fused_ordering(421) 00:09:17.262 fused_ordering(422) 00:09:17.262 fused_ordering(423) 00:09:17.262 fused_ordering(424) 00:09:17.262 fused_ordering(425) 00:09:17.262 fused_ordering(426) 00:09:17.262 fused_ordering(427) 00:09:17.262 fused_ordering(428) 00:09:17.262 fused_ordering(429) 00:09:17.262 fused_ordering(430) 00:09:17.262 fused_ordering(431) 00:09:17.262 fused_ordering(432) 00:09:17.262 fused_ordering(433) 00:09:17.262 fused_ordering(434) 00:09:17.262 fused_ordering(435) 00:09:17.262 fused_ordering(436) 00:09:17.262 fused_ordering(437) 00:09:17.262 fused_ordering(438) 00:09:17.262 fused_ordering(439) 00:09:17.262 fused_ordering(440) 00:09:17.262 fused_ordering(441) 00:09:17.262 fused_ordering(442) 00:09:17.262 fused_ordering(443) 00:09:17.262 fused_ordering(444) 00:09:17.262 fused_ordering(445) 00:09:17.262 fused_ordering(446) 00:09:17.262 fused_ordering(447) 00:09:17.262 fused_ordering(448) 00:09:17.262 fused_ordering(449) 00:09:17.262 fused_ordering(450) 00:09:17.262 fused_ordering(451) 00:09:17.262 fused_ordering(452) 00:09:17.262 fused_ordering(453) 00:09:17.262 fused_ordering(454) 00:09:17.262 fused_ordering(455) 00:09:17.262 fused_ordering(456) 00:09:17.262 fused_ordering(457) 00:09:17.262 fused_ordering(458) 00:09:17.262 fused_ordering(459) 00:09:17.262 fused_ordering(460) 00:09:17.262 fused_ordering(461) 00:09:17.262 fused_ordering(462) 00:09:17.262 fused_ordering(463) 00:09:17.262 fused_ordering(464) 00:09:17.262 fused_ordering(465) 00:09:17.262 fused_ordering(466) 00:09:17.262 fused_ordering(467) 00:09:17.262 fused_ordering(468) 00:09:17.262 fused_ordering(469) 00:09:17.262 fused_ordering(470) 00:09:17.262 fused_ordering(471) 00:09:17.262 fused_ordering(472) 00:09:17.262 fused_ordering(473) 00:09:17.262 fused_ordering(474) 00:09:17.262 fused_ordering(475) 00:09:17.262 fused_ordering(476) 00:09:17.262 fused_ordering(477) 00:09:17.262 fused_ordering(478) 00:09:17.262 fused_ordering(479) 00:09:17.262 fused_ordering(480) 00:09:17.262 fused_ordering(481) 00:09:17.262 fused_ordering(482) 00:09:17.262 fused_ordering(483) 00:09:17.262 fused_ordering(484) 00:09:17.262 fused_ordering(485) 00:09:17.262 fused_ordering(486) 00:09:17.262 fused_ordering(487) 00:09:17.262 fused_ordering(488) 00:09:17.262 fused_ordering(489) 00:09:17.262 fused_ordering(490) 00:09:17.262 fused_ordering(491) 00:09:17.262 fused_ordering(492) 00:09:17.262 fused_ordering(493) 00:09:17.262 fused_ordering(494) 00:09:17.262 fused_ordering(495) 00:09:17.262 fused_ordering(496) 00:09:17.262 fused_ordering(497) 00:09:17.262 fused_ordering(498) 00:09:17.262 fused_ordering(499) 00:09:17.262 fused_ordering(500) 00:09:17.262 fused_ordering(501) 00:09:17.262 fused_ordering(502) 00:09:17.262 fused_ordering(503) 00:09:17.262 fused_ordering(504) 00:09:17.262 fused_ordering(505) 00:09:17.262 fused_ordering(506) 00:09:17.262 fused_ordering(507) 00:09:17.262 fused_ordering(508) 00:09:17.262 fused_ordering(509) 00:09:17.262 fused_ordering(510) 00:09:17.262 fused_ordering(511) 00:09:17.262 fused_ordering(512) 00:09:17.262 fused_ordering(513) 00:09:17.262 fused_ordering(514) 00:09:17.262 fused_ordering(515) 00:09:17.262 fused_ordering(516) 00:09:17.262 fused_ordering(517) 00:09:17.262 fused_ordering(518) 00:09:17.262 fused_ordering(519) 00:09:17.262 fused_ordering(520) 00:09:17.262 fused_ordering(521) 00:09:17.262 fused_ordering(522) 00:09:17.262 fused_ordering(523) 00:09:17.262 fused_ordering(524) 00:09:17.262 fused_ordering(525) 00:09:17.262 fused_ordering(526) 00:09:17.262 fused_ordering(527) 00:09:17.262 
fused_ordering(528) 00:09:17.262 fused_ordering(529) 00:09:17.262 fused_ordering(530) 00:09:17.262 fused_ordering(531) 00:09:17.262 fused_ordering(532) 00:09:17.262 fused_ordering(533) 00:09:17.262 fused_ordering(534) 00:09:17.262 fused_ordering(535) 00:09:17.263 fused_ordering(536) 00:09:17.263 fused_ordering(537) 00:09:17.263 fused_ordering(538) 00:09:17.263 fused_ordering(539) 00:09:17.263 fused_ordering(540) 00:09:17.263 fused_ordering(541) 00:09:17.263 fused_ordering(542) 00:09:17.263 fused_ordering(543) 00:09:17.263 fused_ordering(544) 00:09:17.263 fused_ordering(545) 00:09:17.263 fused_ordering(546) 00:09:17.263 fused_ordering(547) 00:09:17.263 fused_ordering(548) 00:09:17.263 fused_ordering(549) 00:09:17.263 fused_ordering(550) 00:09:17.263 fused_ordering(551) 00:09:17.263 fused_ordering(552) 00:09:17.263 fused_ordering(553) 00:09:17.263 fused_ordering(554) 00:09:17.263 fused_ordering(555) 00:09:17.263 fused_ordering(556) 00:09:17.263 fused_ordering(557) 00:09:17.263 fused_ordering(558) 00:09:17.263 fused_ordering(559) 00:09:17.263 fused_ordering(560) 00:09:17.263 fused_ordering(561) 00:09:17.263 fused_ordering(562) 00:09:17.263 fused_ordering(563) 00:09:17.263 fused_ordering(564) 00:09:17.263 fused_ordering(565) 00:09:17.263 fused_ordering(566) 00:09:17.263 fused_ordering(567) 00:09:17.263 fused_ordering(568) 00:09:17.263 fused_ordering(569) 00:09:17.263 fused_ordering(570) 00:09:17.263 fused_ordering(571) 00:09:17.263 fused_ordering(572) 00:09:17.263 fused_ordering(573) 00:09:17.263 fused_ordering(574) 00:09:17.263 fused_ordering(575) 00:09:17.263 fused_ordering(576) 00:09:17.263 fused_ordering(577) 00:09:17.263 fused_ordering(578) 00:09:17.263 fused_ordering(579) 00:09:17.263 fused_ordering(580) 00:09:17.263 fused_ordering(581) 00:09:17.263 fused_ordering(582) 00:09:17.263 fused_ordering(583) 00:09:17.263 fused_ordering(584) 00:09:17.263 fused_ordering(585) 00:09:17.263 fused_ordering(586) 00:09:17.263 fused_ordering(587) 00:09:17.263 fused_ordering(588) 00:09:17.263 fused_ordering(589) 00:09:17.263 fused_ordering(590) 00:09:17.263 fused_ordering(591) 00:09:17.263 fused_ordering(592) 00:09:17.263 fused_ordering(593) 00:09:17.263 fused_ordering(594) 00:09:17.263 fused_ordering(595) 00:09:17.263 fused_ordering(596) 00:09:17.263 fused_ordering(597) 00:09:17.263 fused_ordering(598) 00:09:17.263 fused_ordering(599) 00:09:17.263 fused_ordering(600) 00:09:17.263 fused_ordering(601) 00:09:17.263 fused_ordering(602) 00:09:17.263 fused_ordering(603) 00:09:17.263 fused_ordering(604) 00:09:17.263 fused_ordering(605) 00:09:17.263 fused_ordering(606) 00:09:17.263 fused_ordering(607) 00:09:17.263 fused_ordering(608) 00:09:17.263 fused_ordering(609) 00:09:17.263 fused_ordering(610) 00:09:17.263 fused_ordering(611) 00:09:17.263 fused_ordering(612) 00:09:17.263 fused_ordering(613) 00:09:17.263 fused_ordering(614) 00:09:17.263 fused_ordering(615) 00:09:17.522 fused_ordering(616) 00:09:17.522 fused_ordering(617) 00:09:17.522 fused_ordering(618) 00:09:17.522 fused_ordering(619) 00:09:17.522 fused_ordering(620) 00:09:17.522 fused_ordering(621) 00:09:17.522 fused_ordering(622) 00:09:17.522 fused_ordering(623) 00:09:17.522 fused_ordering(624) 00:09:17.522 fused_ordering(625) 00:09:17.522 fused_ordering(626) 00:09:17.522 fused_ordering(627) 00:09:17.522 fused_ordering(628) 00:09:17.522 fused_ordering(629) 00:09:17.522 fused_ordering(630) 00:09:17.522 fused_ordering(631) 00:09:17.522 fused_ordering(632) 00:09:17.522 fused_ordering(633) 00:09:17.522 fused_ordering(634) 00:09:17.522 fused_ordering(635) 
00:09:17.522 fused_ordering(636) 00:09:17.522 fused_ordering(637) 00:09:17.522 fused_ordering(638) 00:09:17.522 fused_ordering(639) 00:09:17.522 fused_ordering(640) 00:09:17.522 fused_ordering(641) 00:09:17.522 fused_ordering(642) 00:09:17.522 fused_ordering(643) 00:09:17.522 fused_ordering(644) 00:09:17.522 fused_ordering(645) 00:09:17.522 fused_ordering(646) 00:09:17.522 fused_ordering(647) 00:09:17.522 fused_ordering(648) 00:09:17.522 fused_ordering(649) 00:09:17.522 fused_ordering(650) 00:09:17.522 fused_ordering(651) 00:09:17.522 fused_ordering(652) 00:09:17.522 fused_ordering(653) 00:09:17.522 fused_ordering(654) 00:09:17.522 fused_ordering(655) 00:09:17.522 fused_ordering(656) 00:09:17.522 fused_ordering(657) 00:09:17.522 fused_ordering(658) 00:09:17.522 fused_ordering(659) 00:09:17.522 fused_ordering(660) 00:09:17.522 fused_ordering(661) 00:09:17.522 fused_ordering(662) 00:09:17.522 fused_ordering(663) 00:09:17.522 fused_ordering(664) 00:09:17.522 fused_ordering(665) 00:09:17.522 fused_ordering(666) 00:09:17.522 fused_ordering(667) 00:09:17.522 fused_ordering(668) 00:09:17.522 fused_ordering(669) 00:09:17.522 fused_ordering(670) 00:09:17.522 fused_ordering(671) 00:09:17.522 fused_ordering(672) 00:09:17.522 fused_ordering(673) 00:09:17.522 fused_ordering(674) 00:09:17.522 fused_ordering(675) 00:09:17.522 fused_ordering(676) 00:09:17.522 fused_ordering(677) 00:09:17.522 fused_ordering(678) 00:09:17.522 fused_ordering(679) 00:09:17.522 fused_ordering(680) 00:09:17.522 fused_ordering(681) 00:09:17.522 fused_ordering(682) 00:09:17.522 fused_ordering(683) 00:09:17.522 fused_ordering(684) 00:09:17.522 fused_ordering(685) 00:09:17.522 fused_ordering(686) 00:09:17.522 fused_ordering(687) 00:09:17.522 fused_ordering(688) 00:09:17.522 fused_ordering(689) 00:09:17.522 fused_ordering(690) 00:09:17.522 fused_ordering(691) 00:09:17.522 fused_ordering(692) 00:09:17.522 fused_ordering(693) 00:09:17.522 fused_ordering(694) 00:09:17.522 fused_ordering(695) 00:09:17.522 fused_ordering(696) 00:09:17.522 fused_ordering(697) 00:09:17.522 fused_ordering(698) 00:09:17.522 fused_ordering(699) 00:09:17.522 fused_ordering(700) 00:09:17.522 fused_ordering(701) 00:09:17.522 fused_ordering(702) 00:09:17.522 fused_ordering(703) 00:09:17.522 fused_ordering(704) 00:09:17.522 fused_ordering(705) 00:09:17.522 fused_ordering(706) 00:09:17.522 fused_ordering(707) 00:09:17.522 fused_ordering(708) 00:09:17.522 fused_ordering(709) 00:09:17.522 fused_ordering(710) 00:09:17.522 fused_ordering(711) 00:09:17.522 fused_ordering(712) 00:09:17.522 fused_ordering(713) 00:09:17.522 fused_ordering(714) 00:09:17.522 fused_ordering(715) 00:09:17.522 fused_ordering(716) 00:09:17.522 fused_ordering(717) 00:09:17.522 fused_ordering(718) 00:09:17.522 fused_ordering(719) 00:09:17.522 fused_ordering(720) 00:09:17.522 fused_ordering(721) 00:09:17.522 fused_ordering(722) 00:09:17.522 fused_ordering(723) 00:09:17.522 fused_ordering(724) 00:09:17.522 fused_ordering(725) 00:09:17.522 fused_ordering(726) 00:09:17.522 fused_ordering(727) 00:09:17.522 fused_ordering(728) 00:09:17.522 fused_ordering(729) 00:09:17.522 fused_ordering(730) 00:09:17.522 fused_ordering(731) 00:09:17.522 fused_ordering(732) 00:09:17.522 fused_ordering(733) 00:09:17.522 fused_ordering(734) 00:09:17.522 fused_ordering(735) 00:09:17.522 fused_ordering(736) 00:09:17.522 fused_ordering(737) 00:09:17.522 fused_ordering(738) 00:09:17.522 fused_ordering(739) 00:09:17.522 fused_ordering(740) 00:09:17.522 fused_ordering(741) 00:09:17.522 fused_ordering(742) 00:09:17.522 
fused_ordering(743) 00:09:17.522 fused_ordering(744) 00:09:17.522 fused_ordering(745) 00:09:17.522 fused_ordering(746) 00:09:17.522 fused_ordering(747) 00:09:17.522 fused_ordering(748) 00:09:17.522 fused_ordering(749) 00:09:17.522 fused_ordering(750) 00:09:17.522 fused_ordering(751) 00:09:17.522 fused_ordering(752) 00:09:17.522 fused_ordering(753) 00:09:17.522 fused_ordering(754) 00:09:17.522 fused_ordering(755) 00:09:17.522 fused_ordering(756) 00:09:17.522 fused_ordering(757) 00:09:17.522 fused_ordering(758) 00:09:17.522 fused_ordering(759) 00:09:17.522 fused_ordering(760) 00:09:17.522 fused_ordering(761) 00:09:17.522 fused_ordering(762) 00:09:17.522 fused_ordering(763) 00:09:17.522 fused_ordering(764) 00:09:17.522 fused_ordering(765) 00:09:17.522 fused_ordering(766) 00:09:17.522 fused_ordering(767) 00:09:17.522 fused_ordering(768) 00:09:17.522 fused_ordering(769) 00:09:17.522 fused_ordering(770) 00:09:17.522 fused_ordering(771) 00:09:17.522 fused_ordering(772) 00:09:17.522 fused_ordering(773) 00:09:17.522 fused_ordering(774) 00:09:17.522 fused_ordering(775) 00:09:17.522 fused_ordering(776) 00:09:17.522 fused_ordering(777) 00:09:17.522 fused_ordering(778) 00:09:17.522 fused_ordering(779) 00:09:17.522 fused_ordering(780) 00:09:17.522 fused_ordering(781) 00:09:17.522 fused_ordering(782) 00:09:17.522 fused_ordering(783) 00:09:17.522 fused_ordering(784) 00:09:17.522 fused_ordering(785) 00:09:17.522 fused_ordering(786) 00:09:17.522 fused_ordering(787) 00:09:17.522 fused_ordering(788) 00:09:17.522 fused_ordering(789) 00:09:17.522 fused_ordering(790) 00:09:17.522 fused_ordering(791) 00:09:17.522 fused_ordering(792) 00:09:17.522 fused_ordering(793) 00:09:17.522 fused_ordering(794) 00:09:17.522 fused_ordering(795) 00:09:17.522 fused_ordering(796) 00:09:17.522 fused_ordering(797) 00:09:17.522 fused_ordering(798) 00:09:17.522 fused_ordering(799) 00:09:17.522 fused_ordering(800) 00:09:17.522 fused_ordering(801) 00:09:17.522 fused_ordering(802) 00:09:17.522 fused_ordering(803) 00:09:17.522 fused_ordering(804) 00:09:17.522 fused_ordering(805) 00:09:17.522 fused_ordering(806) 00:09:17.522 fused_ordering(807) 00:09:17.522 fused_ordering(808) 00:09:17.522 fused_ordering(809) 00:09:17.522 fused_ordering(810) 00:09:17.522 fused_ordering(811) 00:09:17.522 fused_ordering(812) 00:09:17.522 fused_ordering(813) 00:09:17.522 fused_ordering(814) 00:09:17.522 fused_ordering(815) 00:09:17.522 fused_ordering(816) 00:09:17.522 fused_ordering(817) 00:09:17.522 fused_ordering(818) 00:09:17.522 fused_ordering(819) 00:09:17.522 fused_ordering(820) 00:09:18.089 fused_ordering(821) 00:09:18.089 fused_ordering(822) 00:09:18.089 fused_ordering(823) 00:09:18.089 fused_ordering(824) 00:09:18.089 fused_ordering(825) 00:09:18.089 fused_ordering(826) 00:09:18.089 fused_ordering(827) 00:09:18.089 fused_ordering(828) 00:09:18.089 fused_ordering(829) 00:09:18.089 fused_ordering(830) 00:09:18.089 fused_ordering(831) 00:09:18.089 fused_ordering(832) 00:09:18.089 fused_ordering(833) 00:09:18.089 fused_ordering(834) 00:09:18.089 fused_ordering(835) 00:09:18.089 fused_ordering(836) 00:09:18.089 fused_ordering(837) 00:09:18.089 fused_ordering(838) 00:09:18.089 fused_ordering(839) 00:09:18.089 fused_ordering(840) 00:09:18.089 fused_ordering(841) 00:09:18.089 fused_ordering(842) 00:09:18.089 fused_ordering(843) 00:09:18.089 fused_ordering(844) 00:09:18.089 fused_ordering(845) 00:09:18.089 fused_ordering(846) 00:09:18.089 fused_ordering(847) 00:09:18.089 fused_ordering(848) 00:09:18.089 fused_ordering(849) 00:09:18.089 fused_ordering(850) 
00:09:18.089 fused_ordering(851) 00:09:18.089 fused_ordering(852) 00:09:18.089 fused_ordering(853) 00:09:18.089 fused_ordering(854) 00:09:18.089 fused_ordering(855) 00:09:18.089 fused_ordering(856) 00:09:18.089 fused_ordering(857) 00:09:18.089 fused_ordering(858) 00:09:18.089 fused_ordering(859) 00:09:18.089 fused_ordering(860) 00:09:18.089 fused_ordering(861) 00:09:18.089 fused_ordering(862) 00:09:18.089 fused_ordering(863) 00:09:18.089 fused_ordering(864) 00:09:18.089 fused_ordering(865) 00:09:18.089 fused_ordering(866) 00:09:18.089 fused_ordering(867) 00:09:18.089 fused_ordering(868) 00:09:18.089 fused_ordering(869) 00:09:18.089 fused_ordering(870) 00:09:18.089 fused_ordering(871) 00:09:18.089 fused_ordering(872) 00:09:18.089 fused_ordering(873) 00:09:18.089 fused_ordering(874) 00:09:18.089 fused_ordering(875) 00:09:18.089 fused_ordering(876) 00:09:18.089 fused_ordering(877) 00:09:18.089 fused_ordering(878) 00:09:18.089 fused_ordering(879) 00:09:18.089 fused_ordering(880) 00:09:18.089 fused_ordering(881) 00:09:18.089 fused_ordering(882) 00:09:18.089 fused_ordering(883) 00:09:18.089 fused_ordering(884) 00:09:18.089 fused_ordering(885) 00:09:18.089 fused_ordering(886) 00:09:18.089 fused_ordering(887) 00:09:18.089 fused_ordering(888) 00:09:18.089 fused_ordering(889) 00:09:18.089 fused_ordering(890) 00:09:18.089 fused_ordering(891) 00:09:18.089 fused_ordering(892) 00:09:18.089 fused_ordering(893) 00:09:18.089 fused_ordering(894) 00:09:18.089 fused_ordering(895) 00:09:18.089 fused_ordering(896) 00:09:18.089 fused_ordering(897) 00:09:18.089 fused_ordering(898) 00:09:18.089 fused_ordering(899) 00:09:18.089 fused_ordering(900) 00:09:18.089 fused_ordering(901) 00:09:18.089 fused_ordering(902) 00:09:18.089 fused_ordering(903) 00:09:18.089 fused_ordering(904) 00:09:18.089 fused_ordering(905) 00:09:18.089 fused_ordering(906) 00:09:18.089 fused_ordering(907) 00:09:18.089 fused_ordering(908) 00:09:18.089 fused_ordering(909) 00:09:18.089 fused_ordering(910) 00:09:18.089 fused_ordering(911) 00:09:18.089 fused_ordering(912) 00:09:18.089 fused_ordering(913) 00:09:18.089 fused_ordering(914) 00:09:18.089 fused_ordering(915) 00:09:18.089 fused_ordering(916) 00:09:18.089 fused_ordering(917) 00:09:18.089 fused_ordering(918) 00:09:18.089 fused_ordering(919) 00:09:18.089 fused_ordering(920) 00:09:18.090 fused_ordering(921) 00:09:18.090 fused_ordering(922) 00:09:18.090 fused_ordering(923) 00:09:18.090 fused_ordering(924) 00:09:18.090 fused_ordering(925) 00:09:18.090 fused_ordering(926) 00:09:18.090 fused_ordering(927) 00:09:18.090 fused_ordering(928) 00:09:18.090 fused_ordering(929) 00:09:18.090 fused_ordering(930) 00:09:18.090 fused_ordering(931) 00:09:18.090 fused_ordering(932) 00:09:18.090 fused_ordering(933) 00:09:18.090 fused_ordering(934) 00:09:18.090 fused_ordering(935) 00:09:18.090 fused_ordering(936) 00:09:18.090 fused_ordering(937) 00:09:18.090 fused_ordering(938) 00:09:18.090 fused_ordering(939) 00:09:18.090 fused_ordering(940) 00:09:18.090 fused_ordering(941) 00:09:18.090 fused_ordering(942) 00:09:18.090 fused_ordering(943) 00:09:18.090 fused_ordering(944) 00:09:18.090 fused_ordering(945) 00:09:18.090 fused_ordering(946) 00:09:18.090 fused_ordering(947) 00:09:18.090 fused_ordering(948) 00:09:18.090 fused_ordering(949) 00:09:18.090 fused_ordering(950) 00:09:18.090 fused_ordering(951) 00:09:18.090 fused_ordering(952) 00:09:18.090 fused_ordering(953) 00:09:18.090 fused_ordering(954) 00:09:18.090 fused_ordering(955) 00:09:18.090 fused_ordering(956) 00:09:18.090 fused_ordering(957) 00:09:18.090 
fused_ordering(958) 00:09:18.090 fused_ordering(959) 00:09:18.090 fused_ordering(960) 00:09:18.090 fused_ordering(961) 00:09:18.090 fused_ordering(962) 00:09:18.090 fused_ordering(963) 00:09:18.090 fused_ordering(964) 00:09:18.090 fused_ordering(965) 00:09:18.090 fused_ordering(966) 00:09:18.090 fused_ordering(967) 00:09:18.090 fused_ordering(968) 00:09:18.090 fused_ordering(969) 00:09:18.090 fused_ordering(970) 00:09:18.090 fused_ordering(971) 00:09:18.090 fused_ordering(972) 00:09:18.090 fused_ordering(973) 00:09:18.090 fused_ordering(974) 00:09:18.090 fused_ordering(975) 00:09:18.090 fused_ordering(976) 00:09:18.090 fused_ordering(977) 00:09:18.090 fused_ordering(978) 00:09:18.090 fused_ordering(979) 00:09:18.090 fused_ordering(980) 00:09:18.090 fused_ordering(981) 00:09:18.090 fused_ordering(982) 00:09:18.090 fused_ordering(983) 00:09:18.090 fused_ordering(984) 00:09:18.090 fused_ordering(985) 00:09:18.090 fused_ordering(986) 00:09:18.090 fused_ordering(987) 00:09:18.090 fused_ordering(988) 00:09:18.090 fused_ordering(989) 00:09:18.090 fused_ordering(990) 00:09:18.090 fused_ordering(991) 00:09:18.090 fused_ordering(992) 00:09:18.090 fused_ordering(993) 00:09:18.090 fused_ordering(994) 00:09:18.090 fused_ordering(995) 00:09:18.090 fused_ordering(996) 00:09:18.090 fused_ordering(997) 00:09:18.090 fused_ordering(998) 00:09:18.090 fused_ordering(999) 00:09:18.090 fused_ordering(1000) 00:09:18.090 fused_ordering(1001) 00:09:18.090 fused_ordering(1002) 00:09:18.090 fused_ordering(1003) 00:09:18.090 fused_ordering(1004) 00:09:18.090 fused_ordering(1005) 00:09:18.090 fused_ordering(1006) 00:09:18.090 fused_ordering(1007) 00:09:18.090 fused_ordering(1008) 00:09:18.090 fused_ordering(1009) 00:09:18.090 fused_ordering(1010) 00:09:18.090 fused_ordering(1011) 00:09:18.090 fused_ordering(1012) 00:09:18.090 fused_ordering(1013) 00:09:18.090 fused_ordering(1014) 00:09:18.090 fused_ordering(1015) 00:09:18.090 fused_ordering(1016) 00:09:18.090 fused_ordering(1017) 00:09:18.090 fused_ordering(1018) 00:09:18.090 fused_ordering(1019) 00:09:18.090 fused_ordering(1020) 00:09:18.090 fused_ordering(1021) 00:09:18.090 fused_ordering(1022) 00:09:18.090 fused_ordering(1023) 00:09:18.090 22:04:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:18.090 22:04:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:18.090 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.090 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.432 rmmod nvme_tcp 00:09:18.432 rmmod nvme_fabrics 00:09:18.432 rmmod nvme_keyring 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71332 ']' 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71332 00:09:18.432 22:04:05 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71332 ']' 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71332 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71332 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:18.432 killing process with pid 71332 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71332' 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71332 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71332 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:18.432 00:09:18.432 real 0m4.309s 00:09:18.432 user 0m5.313s 00:09:18.432 sys 0m1.394s 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.432 22:04:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:18.432 ************************************ 00:09:18.432 END TEST nvmf_fused_ordering 00:09:18.432 ************************************ 00:09:18.691 22:04:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:18.691 22:04:05 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:18.691 22:04:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:18.691 22:04:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.691 22:04:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:18.691 ************************************ 00:09:18.691 START TEST nvmf_delete_subsystem 00:09:18.691 ************************************ 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:18.691 * Looking for test storage... 
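
The fused-ordering test exits through nvmftestfini and nvmf_tcp_fini: sync, unload the nvme-tcp and nvme-fabrics host modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are that unload), kill the target (pid 71332), remove the SPDK network namespace and flush the initiator address, after which the timing summary and the END TEST banner close out nvmf_fused_ordering. That is why the delete_subsystem test starting here has to rebuild the same environment from scratch. A rough equivalent of that cleanup is sketched below; _remove_spdk_ns is an internal helper whose trace is suppressed in the log, so the explicit ip netns del line is an assumption about what it amounts to.

  # approximate teardown, with names and the PID taken from the trace above
  sync
  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  kill 71332                        # nvmf_tgt launched earlier by nvmfappstart
  ip netns del nvmf_tgt_ns_spdk     # assumed effect of the _remove_spdk_ns helper
  ip -4 addr flush nvmf_init_if
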
00:09:18.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.691 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:18.692 Cannot find device "nvmf_tgt_br" 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:18.692 Cannot find device "nvmf_tgt_br2" 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:18.692 Cannot find device "nvmf_tgt_br" 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:18.692 Cannot find device "nvmf_tgt_br2" 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:18.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:18.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:18.692 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:18.951 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:18.951 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:18.951 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:18.951 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:18.951 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:18.952 22:04:05 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:18.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:09:18.952 00:09:18.952 --- 10.0.0.2 ping statistics --- 00:09:18.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.952 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:18.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:18.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:18.952 00:09:18.952 --- 10.0.0.3 ping statistics --- 00:09:18.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.952 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:18.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:09:18.952 00:09:18.952 --- 10.0.0.1 ping statistics --- 00:09:18.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.952 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71601 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71601 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71601 ']' 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
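The nvmf_veth_init trace above reduces to a small standalone topology: a dedicated network namespace for the target, veth pairs bridged back to the initiator side, an iptables accept rule for port 4420, and a ping in each direction to confirm connectivity before the target is started. A minimal sketch, using the interface, namespace, and address names from the trace (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way and omitted here):

  # Namespace for the SPDK target and veth pairs toward the initiator side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge that ties the initiator-side and target-side veth ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Allow NVMe/TCP traffic in and let the bridge forward it
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Connectivity check in both directions, as the trace does
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1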
00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.952 22:04:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:19.211 [2024-07-15 22:04:05.928869] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:09:19.211 [2024-07-15 22:04:05.929007] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.211 [2024-07-15 22:04:06.070859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:19.211 [2024-07-15 22:04:06.159132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.211 [2024-07-15 22:04:06.159224] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.211 [2024-07-15 22:04:06.159246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.211 [2024-07-15 22:04:06.159260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.211 [2024-07-15 22:04:06.159272] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.211 [2024-07-15 22:04:06.159408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.211 [2024-07-15 22:04:06.159431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.147 [2024-07-15 22:04:06.921731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.147 [2024-07-15 22:04:06.941877] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.147 NULL1 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.147 Delay0 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71652 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:20.147 22:04:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:20.406 [2024-07-15 22:04:07.144056] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
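Collapsed into plain scripts/rpc.py calls (rpc_cmd in the trace is a wrapper that talks to the target over /var/tmp/spdk.sock), the setup that delete_subsystem.sh performs before any I/O is issued looks roughly like the sketch below. The paths, NQN, and parameters are copied from the trace; backgrounding nvmf_tgt and spdk_nvme_perf by hand is an assumption standing in for nvmfappstart/waitforlisten and the script's own job control.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target application runs inside the namespace on cores 0-1 (-m 0x3)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # (the real script waits for /var/tmp/spdk.sock before issuing RPCs)

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512            # 1000 MiB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s of injected latency per I/O
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Initiator-side load, backgrounded so the subsystem can be deleted mid-run
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!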
00:09:22.308 22:04:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.308 22:04:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.308 22:04:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 starting I/O failed: -6 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 starting I/O failed: -6 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 starting I/O failed: -6 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 starting I/O failed: -6 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 starting I/O failed: -6 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 starting I/O failed: -6 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 starting I/O failed: -6 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 starting I/O failed: -6 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 starting I/O failed: -6 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 starting I/O failed: -6 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 starting I/O failed: -6 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 [2024-07-15 22:04:09.177837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164da80 is same with the state(5) to be set 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 
Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 [2024-07-15 22:04:09.178934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162a8d0 is same with the state(5) to be set 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 Read completed with error (sct=0, sc=8) 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.308 starting I/O failed: -6 00:09:22.308 Write completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 starting I/O failed: -6 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 starting I/O failed: -6 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error 
(sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 starting I/O failed: -6 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 starting I/O failed: -6 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 starting I/O failed: -6 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 starting I/O failed: -6 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 starting I/O failed: -6 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 starting I/O failed: -6 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 [2024-07-15 22:04:09.183441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2f6000d2f0 is same with the state(5) to be set 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 
00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:22.309 Read completed with error (sct=0, sc=8) 00:09:22.309 Write completed with error (sct=0, sc=8) 00:09:23.243 [2024-07-15 22:04:10.161877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162a510 is same with the state(5) to be set 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 [2024-07-15 22:04:10.179097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162a6f0 is same with the state(5) to be set 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 [2024-07-15 22:04:10.179653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x164c4c0 is same with the state(5) to be set 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 [2024-07-15 22:04:10.181802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2f6000cfe0 is same with the state(5) to be set 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Write completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 Read completed with error (sct=0, sc=8) 00:09:23.244 [2024-07-15 22:04:10.182938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2f6000d600 is same with the state(5) to be set 00:09:23.244 Initializing NVMe Controllers 00:09:23.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:23.244 Controller IO queue size 128, less than required. 00:09:23.244 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
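The wall of "completed with error (sct=0, sc=8)" lines above is the point of the test rather than a failure of it: status code type 0, status code 0x08 is the NVMe generic status "Command Aborted due to SQ Deletion", and the interleaved "starting I/O failed: -6" lines are new submissions rejected once the queue pair is gone. Continuing the sketch above, the step that triggers this is simply (the wait-based check paraphrases the script's kill / NOT-wait polling):

  sleep 2                                                  # let perf (-t 5) build up queued I/O
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # yank the subsystem mid-run
  if wait "$perf_pid"; then                                # perf is expected to exit non-zero
      echo "unexpected: spdk_nvme_perf should fail after the delete" >&2
      exit 1
  fi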
00:09:23.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:23.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:23.244 Initialization complete. Launching workers. 00:09:23.244 ======================================================== 00:09:23.244 Latency(us) 00:09:23.244 Device Information : IOPS MiB/s Average min max 00:09:23.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.99 0.08 905682.02 1102.71 1011130.43 00:09:23.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.55 0.08 991194.82 2151.17 2002209.90 00:09:23.244 ======================================================== 00:09:23.244 Total : 320.54 0.16 947178.93 1102.71 2002209.90 00:09:23.244 00:09:23.244 [2024-07-15 22:04:10.183581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162a510 (9): Bad file descriptor 00:09:23.244 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:23.244 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.244 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:23.244 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71652 00:09:23.244 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71652 00:09:23.810 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71652) - No such process 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71652 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71652 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71652 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.810 [2024-07-15 22:04:10.709753] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71703 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71703 00:09:23.810 22:04:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:24.066 [2024-07-15 22:04:10.878291] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
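In this second round the subsystem, listener, and Delay0 namespace are recreated and a shorter 3-second perf run is started, but nothing is deleted underneath it; the repeating sleep 0.5 lines are a bounded wait for perf to finish on its own. A sketch of that loop (the limit of 20 iterations matches the trace, while the error handling is a paraphrase rather than the literal script):

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do    # still running?
      if (( delay++ > 20 )); then              # ~10 s cap turns a hang into a test failure
          echo "spdk_nvme_perf did not exit in time" >&2
          exit 1
      fi
      sleep 0.5
  done
  wait "$perf_pid"                             # expected to succeed this time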
00:09:24.324 22:04:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:24.324 22:04:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71703 00:09:24.324 22:04:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:24.906 22:04:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:24.906 22:04:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71703 00:09:24.906 22:04:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:25.495 22:04:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:25.495 22:04:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71703 00:09:25.495 22:04:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:26.060 22:04:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:26.060 22:04:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71703 00:09:26.060 22:04:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:26.317 22:04:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:26.317 22:04:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71703 00:09:26.317 22:04:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:26.892 22:04:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:26.892 22:04:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71703 00:09:26.892 22:04:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:27.150 Initializing NVMe Controllers 00:09:27.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:27.150 Controller IO queue size 128, less than required. 00:09:27.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:27.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:27.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:27.150 Initialization complete. Launching workers. 
00:09:27.150 ======================================================== 00:09:27.150 Latency(us) 00:09:27.150 Device Information : IOPS MiB/s Average min max 00:09:27.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003840.71 1000226.83 1040787.34 00:09:27.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005541.37 1000173.06 1041745.23 00:09:27.150 ======================================================== 00:09:27.150 Total : 256.00 0.12 1004691.04 1000173.06 1041745.23 00:09:27.150 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71703 00:09:27.409 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71703) - No such process 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71703 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.409 rmmod nvme_tcp 00:09:27.409 rmmod nvme_fabrics 00:09:27.409 rmmod nvme_keyring 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71601 ']' 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71601 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71601 ']' 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71601 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:27.409 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71601 00:09:27.666 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:27.666 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:27.666 killing process with pid 71601 00:09:27.666 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71601' 00:09:27.666 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71601 00:09:27.666 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71601 00:09:27.666 22:04:14 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.667 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.667 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.667 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.667 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.667 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.667 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.667 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.667 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:27.667 ************************************ 00:09:27.667 END TEST nvmf_delete_subsystem 00:09:27.667 ************************************ 00:09:27.667 00:09:27.667 real 0m9.187s 00:09:27.667 user 0m28.359s 00:09:27.667 sys 0m1.572s 00:09:27.667 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.667 22:04:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.667 22:04:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:27.667 22:04:14 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:27.667 22:04:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.667 22:04:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.667 22:04:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.927 ************************************ 00:09:27.927 START TEST nvmf_ns_masking 00:09:27.927 ************************************ 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:27.927 * Looking for test storage... 
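Before ns_masking begins, the nvmftestfini teardown above unloads the host-side NVMe/TCP kernel modules, kills the nvmf_tgt that delete_subsystem started, and flushes the test addresses so the next test initializes from a clean slate. Roughly (the netns deletion is an assumption about what _remove_spdk_ns amounts to on this setup, and killprocess additionally waits for the pid to disappear, which is omitted here):

  modprobe -v -r nvme-tcp              # also pulls out nvme_fabrics / nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                      # nvmf_tgt started earlier (pid 71601 in this run)
  ip netns delete nvmf_tgt_ns_spdk     # assumed equivalent of _remove_spdk_ns here
  ip -4 addr flush nvmf_init_if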
00:09:27.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=93975998-7a70-48e3-84bd-509fa09294e7 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=778ab7f8-5845-42dd-92ed-de126ff4e002 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:27.927 
22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=90c537c4-41a9-4128-8676-13c07ae13ada 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:27.927 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:27.928 Cannot find device "nvmf_tgt_br" 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:09:27.928 22:04:14 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.928 Cannot find device "nvmf_tgt_br2" 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:27.928 Cannot find device "nvmf_tgt_br" 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:27.928 Cannot find device "nvmf_tgt_br2" 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:27.928 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:28.186 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:28.186 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:28.187 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:28.187 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:28.187 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:28.187 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:28.187 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:28.187 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:28.187 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:28.187 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:28.187 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:28.187 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:28.187 22:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:28.187 22:04:14 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:28.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:28.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:09:28.187 00:09:28.187 --- 10.0.0.2 ping statistics --- 00:09:28.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.187 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:28.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:28.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:28.187 00:09:28.187 --- 10.0.0.3 ping statistics --- 00:09:28.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.187 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:28.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:28.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:28.187 00:09:28.187 --- 10.0.0.1 ping statistics --- 00:09:28.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.187 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=71936 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 71936 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 71936 ']' 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.187 22:04:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:28.445 [2024-07-15 22:04:15.170531] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:09:28.445 [2024-07-15 22:04:15.170669] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.445 [2024-07-15 22:04:15.316057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.445 [2024-07-15 22:04:15.389521] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.445 [2024-07-15 22:04:15.389593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:28.445 [2024-07-15 22:04:15.389607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.445 [2024-07-15 22:04:15.389617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.445 [2024-07-15 22:04:15.389629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.445 [2024-07-15 22:04:15.389671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.379 22:04:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.379 22:04:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:29.379 22:04:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.379 22:04:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.379 22:04:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:29.379 22:04:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.379 22:04:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:29.636 [2024-07-15 22:04:16.517514] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.636 22:04:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:29.636 22:04:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:29.636 22:04:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:29.895 Malloc1 00:09:29.895 22:04:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:30.462 Malloc2 00:09:30.462 22:04:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:30.721 22:04:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:31.287 22:04:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.544 [2024-07-15 22:04:18.314579] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.544 22:04:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:31.544 22:04:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 90c537c4-41a9-4128-8676-13c07ae13ada -a 10.0.0.2 -s 4420 -i 4 00:09:31.544 22:04:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.544 22:04:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:31.544 22:04:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.544 22:04:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:31.544 22:04:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
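The trace so far is the ns_masking.sh prologue: nvmftestinit builds a veth/bridge topology (nvmf_init_if at 10.0.0.1 in the root namespace, nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3 inside nvmf_tgt_ns_spdk, verified by the three pings), starts nvmf_tgt inside that namespace as pid 71936, creates the TCP transport and two malloc bdevs, publishes Malloc1 as namespace 1 of nqn.2016-06.io.spdk:cnode1, and connects from the host with a fixed host NQN and host ID. Condensed into plain commands, with every path, NQN, UUID and flag copied from the trace itself, this is a sketch of what the test does rather than a reference for the tools:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target RPC socket is the default /var/tmp/spdk.sock

    # Target side
    $rpc nvmf_create_transport -t tcp -o -u 8192        # flags come from the test's NVMF_TRANSPORT_OPTS
    $rpc bdev_malloc_create 64 512 -b Malloc1            # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $rpc bdev_malloc_create 64 512 -b Malloc2            # Malloc2 is attached as nsid 2 further down
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: connect with an explicit host NQN and host ID; waitforserial then
    # polls 'lsblk -l -o NAME,SERIAL' for the SPDKISFASTANDAWESOME serial
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 90c537c4-41a9-4128-8676-13c07ae13ada -a 10.0.0.2 -s 4420 -i 4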
00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:34.073 [ 0]:0x1 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=137cc8f8ba8a4fbb878864fe58a7c0d9 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 137cc8f8ba8a4fbb878864fe58a7c0d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:34.073 [ 0]:0x1 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=137cc8f8ba8a4fbb878864fe58a7c0d9 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 137cc8f8ba8a4fbb878864fe58a7c0d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:34.073 [ 1]:0x2 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=4c662f1fc5fd4e79a6f4e00035a7b83d 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c662f1fc5fd4e79a6f4e00035a7b83d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:34.073 22:04:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.331 22:04:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.589 22:04:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:34.846 22:04:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:34.846 22:04:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 90c537c4-41a9-4128-8676-13c07ae13ada -a 10.0.0.2 -s 4420 -i 4 00:09:35.158 22:04:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:35.158 22:04:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:35.158 22:04:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:35.158 22:04:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:35.158 22:04:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:35.158 22:04:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:37.052 22:04:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:37.310 [ 0]:0x2 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c662f1fc5fd4e79a6f4e00035a7b83d 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c662f1fc5fd4e79a6f4e00035a7b83d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.310 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:37.567 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:37.567 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:37.567 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:37.567 [ 0]:0x1 00:09:37.567 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:37.567 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:37.824 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=137cc8f8ba8a4fbb878864fe58a7c0d9 00:09:37.824 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 137cc8f8ba8a4fbb878864fe58a7c0d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.824 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:37.824 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:37.824 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:37.824 [ 1]:0x2 00:09:37.824 22:04:24 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:37.824 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:37.824 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c662f1fc5fd4e79a6f4e00035a7b83d 00:09:37.824 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c662f1fc5fd4e79a6f4e00035a7b83d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.824 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:38.082 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:38.083 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:38.083 [ 0]:0x2 00:09:38.083 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:38.083 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:38.083 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c662f1fc5fd4e79a6f4e00035a7b83d 00:09:38.083 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c662f1fc5fd4e79a6f4e00035a7b83d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:38.083 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
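This block is the heart of the masking test. Malloc1 was re-added with --no-auto-visible, so immediately after connecting nsid 1 is hidden (the NOT ns_is_visible 0x1 check reads back an all-zero NGUID), nvmf_ns_add_host then exposes it to nqn.2016-06.io.spdk:host1, and nvmf_ns_remove_host hides it again, while nsid 2 stays visible throughout. A rough reconstruction of the ns_is_visible helper as it appears in this trace (the real function lives in test/nvmf/target/ns_masking.sh and may differ in detail):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    ns_is_visible() {    # $1 is the nsid as printed by 'nvme list-ns', e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1"
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # in this trace a hidden namespace reports an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    # Per-host visibility is toggled with two RPCs: <subsystem NQN> <nsid> <host NQN>
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1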
00:09:38.083 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:38.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.083 22:04:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:38.648 22:04:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:38.648 22:04:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 90c537c4-41a9-4128-8676-13c07ae13ada -a 10.0.0.2 -s 4420 -i 4 00:09:38.648 22:04:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:38.648 22:04:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:38.648 22:04:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.648 22:04:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:38.648 22:04:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:38.648 22:04:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:40.548 22:04:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:40.548 22:04:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:40.548 22:04:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.548 22:04:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:40.548 22:04:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.548 22:04:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:40.548 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:40.548 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:40.806 [ 0]:0x1 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=137cc8f8ba8a4fbb878864fe58a7c0d9 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 137cc8f8ba8a4fbb878864fe58a7c0d9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # grep 0x2 00:09:40.806 [ 1]:0x2 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c662f1fc5fd4e79a6f4e00035a7b83d 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c662f1fc5fd4e79a6f4e00035a7b83d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:40.806 22:04:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:41.373 [ 0]:0x2 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c662f1fc5fd4e79a6f4e00035a7b83d 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c662f1fc5fd4e79a6f4e00035a7b83d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:41.373 22:04:28 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:41.373 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:41.631 [2024-07-15 22:04:28.417678] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:41.631 2024/07/15 22:04:28 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:09:41.631 request: 00:09:41.631 { 00:09:41.631 "method": "nvmf_ns_remove_host", 00:09:41.631 "params": { 00:09:41.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.631 "nsid": 2, 00:09:41.631 "host": "nqn.2016-06.io.spdk:host1" 00:09:41.631 } 00:09:41.631 } 00:09:41.631 Got JSON-RPC error response 00:09:41.631 GoRPCClient: error on JSON-RPC call 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:41.631 22:04:28 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:41.631 [ 0]:0x2 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4c662f1fc5fd4e79a6f4e00035a7b83d 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4c662f1fc5fd4e79a6f4e00035a7b83d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:41.631 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:41.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.889 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72329 00:09:41.889 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:41.889 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.889 22:04:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72329 /var/tmp/host.sock 00:09:41.889 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72329 ']' 00:09:41.889 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:41.889 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.889 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:41.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
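At this point the namespace-level checks are done (including the expected JSON-RPC -32602 error when nvmf_ns_remove_host is called for nsid 2, which never had masking configured) and the test moves to the bdev layer: a second SPDK application is started as the NVMe-oF host with its own RPC socket so it can be driven independently of the target, and the entries that follow re-add Malloc1/Malloc2 with explicit NGUIDs (uuid2nguid strips the dashes with tr -d -) and attach one controller per host NQN. Condensed from the surrounding entries, with flags copied from the trace:

    # Host-side SPDK app: own RPC socket, core mask 2 (the trace records its pid
    # as hostpid=72329 and waits for /var/tmp/host.sock to appear)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &

    hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

    # One attach per host NQN; with masking in place each controller exposes a
    # different namespace (nvme0n1 for host1, nvme1n2 for host2 further down)
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1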
00:09:41.889 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.889 22:04:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:41.889 [2024-07-15 22:04:28.685058] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:09:41.889 [2024-07-15 22:04:28.685992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72329 ] 00:09:41.889 [2024-07-15 22:04:28.826904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.146 [2024-07-15 22:04:28.914580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.099 22:04:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:43.099 22:04:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:43.099 22:04:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.368 22:04:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:43.625 22:04:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 93975998-7a70-48e3-84bd-509fa09294e7 00:09:43.625 22:04:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:43.625 22:04:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 939759987A7048E384BD509FA09294E7 -i 00:09:44.191 22:04:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 778ab7f8-5845-42dd-92ed-de126ff4e002 00:09:44.191 22:04:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:44.191 22:04:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 778AB7F8584542DD92EDDE126FF4E002 -i 00:09:44.448 22:04:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:45.015 22:04:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:45.272 22:04:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:45.272 22:04:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:45.529 nvme0n1 00:09:45.529 22:04:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:45.529 22:04:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:46.093 nvme1n2 00:09:46.093 22:04:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:46.093 22:04:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:46.093 22:04:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:46.093 22:04:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:46.093 22:04:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:46.350 22:04:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:46.350 22:04:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:46.350 22:04:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:46.350 22:04:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:46.350 22:04:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 93975998-7a70-48e3-84bd-509fa09294e7 == \9\3\9\7\5\9\9\8\-\7\a\7\0\-\4\8\e\3\-\8\4\b\d\-\5\0\9\f\a\0\9\2\9\4\e\7 ]] 00:09:46.350 22:04:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:46.350 22:04:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:46.350 22:04:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:46.916 22:04:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 778ab7f8-5845-42dd-92ed-de126ff4e002 == \7\7\8\a\b\7\f\8\-\5\8\4\5\-\4\2\d\d\-\9\2\e\d\-\d\e\1\2\6\f\f\4\e\0\0\2 ]] 00:09:46.916 22:04:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72329 00:09:46.916 22:04:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72329 ']' 00:09:46.916 22:04:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72329 00:09:46.916 22:04:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:46.916 22:04:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.916 22:04:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72329 00:09:46.916 killing process with pid 72329 00:09:46.916 22:04:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:46.916 22:04:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:46.916 22:04:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72329' 00:09:46.916 22:04:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72329 00:09:46.916 22:04:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72329 00:09:47.174 22:04:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.431 22:04:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:47.431 22:04:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:47.431 22:04:34 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:47.431 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:47.689 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:47.689 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:47.689 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:47.689 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:47.689 rmmod nvme_tcp 00:09:47.689 rmmod nvme_fabrics 00:09:47.689 rmmod nvme_keyring 00:09:47.689 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:47.689 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:47.689 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:47.690 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 71936 ']' 00:09:47.690 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 71936 00:09:47.690 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 71936 ']' 00:09:47.690 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 71936 00:09:47.690 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:47.690 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:47.690 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71936 00:09:47.690 killing process with pid 71936 00:09:47.690 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:47.690 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:47.690 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71936' 00:09:47.690 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 71936 00:09:47.690 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 71936 00:09:47.948 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.948 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.948 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.948 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.948 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.948 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.948 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.948 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.948 22:04:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:47.948 00:09:47.948 real 0m20.080s 00:09:47.948 user 0m33.508s 00:09:47.948 sys 0m2.820s 00:09:47.948 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.948 22:04:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:47.948 ************************************ 00:09:47.948 END TEST nvmf_ns_masking 00:09:47.948 ************************************ 00:09:47.948 22:04:34 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:47.948 22:04:34 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:09:47.948 22:04:34 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:09:47.948 22:04:34 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:47.948 22:04:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:47.948 22:04:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.948 22:04:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:47.948 ************************************ 00:09:47.948 START TEST nvmf_host_management 00:09:47.948 ************************************ 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:47.948 * Looking for test storage... 00:09:47.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:47.948 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:47.949 Cannot find device "nvmf_tgt_br" 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.949 Cannot find device "nvmf_tgt_br2" 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:09:47.949 22:04:34 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:47.949 Cannot find device "nvmf_tgt_br" 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:09:47.949 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:48.207 Cannot find device "nvmf_tgt_br2" 00:09:48.207 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:09:48.207 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:48.207 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:48.207 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:48.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:48.207 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:48.207 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:48.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:48.207 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:48.207 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:48.207 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:48.207 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:48.207 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:48.207 22:04:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:48.207 
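The commands traced here (and the bridging and iptables rule that follow just below) come from nvmf_veth_init in test/nvmf/common.sh: the target will run inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and the veth peers are tied together by the nvmf_br bridge. A minimal sketch of the same topology, using the interface names from the trace and leaving out the second target interface (nvmf_tgt_if2/nvmf_tgt_br2) for brevity:

    # create the target namespace and the veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    # move the target-side end into the namespace and assign addresses
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # bring everything up and enslave the bridge-side peers to nvmf_br
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic to the target's port
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks that follow in the trace then confirm 10.0.0.2 and 10.0.0.3 are reachable from the root namespace and 10.0.0.1 is reachable from inside the target namespace.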
22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:48.207 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:48.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:48.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:09:48.465 00:09:48.465 --- 10.0.0.2 ping statistics --- 00:09:48.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.465 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:48.465 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:48.465 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:09:48.465 00:09:48.465 --- 10.0.0.3 ping statistics --- 00:09:48.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.465 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:48.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:48.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:09:48.465 00:09:48.465 --- 10.0.0.1 ping statistics --- 00:09:48.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.465 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72700 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72700 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72700 ']' 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.465 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:48.465 [2024-07-15 22:04:35.263862] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:09:48.465 [2024-07-15 22:04:35.263953] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.465 [2024-07-15 22:04:35.400684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:48.723 [2024-07-15 22:04:35.465057] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.723 [2024-07-15 22:04:35.465340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.723 [2024-07-15 22:04:35.465504] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.723 [2024-07-15 22:04:35.465631] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.723 [2024-07-15 22:04:35.465761] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:48.723 [2024-07-15 22:04:35.466002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.723 [2024-07-15 22:04:35.466147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.723 [2024-07-15 22:04:35.466464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:48.723 [2024-07-15 22:04:35.466480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:48.723 [2024-07-15 22:04:35.595547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.723 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:48.723 Malloc0 00:09:48.723 [2024-07-15 22:04:35.660831] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=72754 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 72754 /var/tmp/bdevperf.sock 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72754 ']' 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
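With the target's RPC socket up, host_management.sh creates the TCP transport (nvmf_create_transport -t tcp -o -u 8192) and then pipes a generated rpcs.txt into rpc_cmd; the heredoc itself is not echoed in the trace, but given the Malloc0 bdev and the listener on 10.0.0.2:4420 that appear, an equivalent provisioning sequence with the standard SPDK RPC client would look roughly like the sketch below. The exact rpcs.txt contents are an assumption, and scripts/rpc.py stands in for the test's rpc_cmd wrapper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client; rpc_cmd drives the same RPCs
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create -b Malloc0 64 512          # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0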
00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:48.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:48.982 { 00:09:48.982 "params": { 00:09:48.982 "name": "Nvme$subsystem", 00:09:48.982 "trtype": "$TEST_TRANSPORT", 00:09:48.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:48.982 "adrfam": "ipv4", 00:09:48.982 "trsvcid": "$NVMF_PORT", 00:09:48.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:48.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:48.982 "hdgst": ${hdgst:-false}, 00:09:48.982 "ddgst": ${ddgst:-false} 00:09:48.982 }, 00:09:48.982 "method": "bdev_nvme_attach_controller" 00:09:48.982 } 00:09:48.982 EOF 00:09:48.982 )") 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:48.982 22:04:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:48.982 "params": { 00:09:48.982 "name": "Nvme0", 00:09:48.982 "trtype": "tcp", 00:09:48.982 "traddr": "10.0.0.2", 00:09:48.982 "adrfam": "ipv4", 00:09:48.982 "trsvcid": "4420", 00:09:48.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:48.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:48.982 "hdgst": false, 00:09:48.982 "ddgst": false 00:09:48.982 }, 00:09:48.982 "method": "bdev_nvme_attach_controller" 00:09:48.982 }' 00:09:48.982 [2024-07-15 22:04:35.771860] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:09:48.982 [2024-07-15 22:04:35.771993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72754 ] 00:09:48.982 [2024-07-15 22:04:35.911259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.240 [2024-07-15 22:04:35.999250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.240 Running I/O for 10 seconds... 
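bdevperf (pid 72754) now runs a 10-second verify workload against the attached controller while the test, via waitforio, waits for at least 100 reads to complete on Nvme0n1. The polling visible below (read_io_count=37, then 323) boils down to a loop like this sketch, again with rpc.py assumed in place of rpc_cmd:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client, as above
    i=10
    while (( i != 0 )); do
        reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [[ $reads -ge 100 ]] && break
        sleep 0.25
        (( i-- ))
    done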
00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=37 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 37 -ge 100 ']' 00:09:49.497 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.756 [2024-07-15 22:04:36.605950] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2415310 is same with the state(5) to be set 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.756 22:04:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:49.756 [2024-07-15 22:04:36.623579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:49.756 [2024-07-15 22:04:36.623650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.756 [2024-07-15 22:04:36.623673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:49.756 [2024-07-15 22:04:36.623688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.756 [2024-07-15 22:04:36.623707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:49.756 [2024-07-15 22:04:36.623721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.756 [2024-07-15 22:04:36.623738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:49.756 [2024-07-15 22:04:36.623753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.756 [2024-07-15 22:04:36.623768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd3af0 is same with the state(5) to be set 00:09:49.756 [2024-07-15 22:04:36.672602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd3af0 (9): Bad file descriptor 00:09:49.756 [2024-07-15 22:04:36.672898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.756 [2024-07-15 22:04:36.672923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.756 [2024-07-15 
22:04:36.672959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.756 [2024-07-15 22:04:36.672974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.756 [2024-07-15 22:04:36.672999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.756 [2024-07-15 22:04:36.673016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.756 [2024-07-15 22:04:36.673038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.756 [2024-07-15 22:04:36.673054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.756 [2024-07-15 22:04:36.673076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.756 [2024-07-15 22:04:36.673106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.756 [2024-07-15 22:04:36.673130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673344] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.673969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.673985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.674967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.757 [2024-07-15 22:04:36.674981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.757 [2024-07-15 22:04:36.675000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.758 [2024-07-15 22:04:36.675015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.758 [2024-07-15 22:04:36.675035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.758 [2024-07-15 22:04:36.675048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.758 [2024-07-15 22:04:36.675069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.758 [2024-07-15 22:04:36.675094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.758 [2024-07-15 22:04:36.675119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.758 [2024-07-15 22:04:36.675133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.758 [2024-07-15 22:04:36.675154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.758 [2024-07-15 22:04:36.675168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.758 [2024-07-15 22:04:36.675188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.758 [2024-07-15 22:04:36.675202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.758 [2024-07-15 22:04:36.675223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:49.758 [2024-07-15 22:04:36.675236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:49.758 [2024-07-15 22:04:36.675340] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbd3820 was disconnected and freed. reset controller. 00:09:49.758 [2024-07-15 22:04:36.679679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:49.758 task offset: 49152 on job bdev=Nvme0n1 fails 00:09:49.758 00:09:49.758 Latency(us) 00:09:49.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.758 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:49.758 Job: Nvme0n1 ended in about 0.52 seconds with error 00:09:49.758 Verification LBA range: start 0x0 length 0x400 00:09:49.758 Nvme0n1 : 0.52 736.90 46.06 122.82 0.00 71933.39 3783.21 81979.58 00:09:49.758 =================================================================================================================== 00:09:49.758 Total : 736.90 46.06 122.82 0.00 71933.39 3783.21 81979.58 00:09:49.758 [2024-07-15 22:04:36.686633] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:49.758 [2024-07-15 22:04:36.695720] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
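The error path above is the point of the test: while bdevperf had WRITEs in flight, host_management.sh revoked the host's access to the subsystem and immediately restored it (script lines 84, 85 and 87 in the trace), so the target dropped the qpair, the queued WRITEs were aborted with SQ DELETION, the job failed after about 0.52 s, and the initiator reset the controller successfully. As traced, the fault-injection step is essentially:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client, as above
    # revoke and restore the host's access while I/O is outstanding
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1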
00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 72754 00:09:50.688 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72754) - No such process 00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:50.688 { 00:09:50.688 "params": { 00:09:50.688 "name": "Nvme$subsystem", 00:09:50.688 "trtype": "$TEST_TRANSPORT", 00:09:50.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.688 "adrfam": "ipv4", 00:09:50.688 "trsvcid": "$NVMF_PORT", 00:09:50.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.688 "hdgst": ${hdgst:-false}, 00:09:50.688 "ddgst": ${ddgst:-false} 00:09:50.688 }, 00:09:50.688 "method": "bdev_nvme_attach_controller" 00:09:50.688 } 00:09:50.688 EOF 00:09:50.688 )") 00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:50.688 22:04:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:50.688 "params": { 00:09:50.688 "name": "Nvme0", 00:09:50.688 "trtype": "tcp", 00:09:50.688 "traddr": "10.0.0.2", 00:09:50.688 "adrfam": "ipv4", 00:09:50.688 "trsvcid": "4420", 00:09:50.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:50.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:50.688 "hdgst": false, 00:09:50.688 "ddgst": false 00:09:50.688 }, 00:09:50.688 "method": "bdev_nvme_attach_controller" 00:09:50.688 }' 00:09:50.946 [2024-07-15 22:04:37.690251] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:09:50.946 [2024-07-15 22:04:37.690370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72800 ] 00:09:50.946 [2024-07-15 22:04:37.858911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.202 [2024-07-15 22:04:37.947261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.202 Running I/O for 1 seconds... 
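Both bdevperf runs receive their bdev configuration from gen_nvmf_target_json, whose per-controller fragment is printed above; the jq wrapping step is not echoed, but the result handed to bdevperf over /dev/fd/62 should take the usual SPDK JSON-config shape, attaching the target as controller Nvme0 (hence bdev Nvme0n1). A sketch under that assumption, writing the config to a hypothetical temp file instead of using process substitution:

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1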
00:09:52.571 00:09:52.571 Latency(us) 00:09:52.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.571 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:52.571 Verification LBA range: start 0x0 length 0x400 00:09:52.571 Nvme0n1 : 1.03 723.46 45.22 0.00 0.00 84322.41 5898.24 79596.45 00:09:52.571 =================================================================================================================== 00:09:52.571 Total : 723.46 45.22 0.00 0.00 84322.41 5898.24 79596.45 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.571 rmmod nvme_tcp 00:09:52.571 rmmod nvme_fabrics 00:09:52.571 rmmod nvme_keyring 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 72700 ']' 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 72700 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72700 ']' 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72700 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72700 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:52.571 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:52.571 killing process with pid 72700 00:09:52.572 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72700' 00:09:52.572 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72700 00:09:52.572 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72700 00:09:52.849 [2024-07-15 22:04:39.687485] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for 
core 1, errno: 2 00:09:52.849 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:52.849 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:52.849 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:52.849 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.849 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:52.849 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.849 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.849 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.849 22:04:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:52.849 22:04:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:52.849 00:09:52.849 real 0m5.020s 00:09:52.849 user 0m19.376s 00:09:52.849 sys 0m1.129s 00:09:52.849 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.849 22:04:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.849 ************************************ 00:09:52.849 END TEST nvmf_host_management 00:09:52.849 ************************************ 00:09:53.126 22:04:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:53.126 22:04:39 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:53.126 22:04:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:53.126 22:04:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.126 22:04:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:53.126 ************************************ 00:09:53.126 START TEST nvmf_lvol 00:09:53.126 ************************************ 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:53.126 * Looking for test storage... 
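For orientation, the stoptarget/nvmftestfini teardown traced at the end of nvmf_host_management above reduces to roughly the sequence below. This is a condensed sketch of the traced commands, not the full helpers: pid 72700 is the nvmf_tgt started earlier in that test, and the ip netns delete line is an assumption about what the _remove_spdk_ns helper amounts to (the trace only shows it being invoked).

    # condensed sketch of the traced teardown path (see assumptions above)
    rm -f ./local-job0-0-verify.state
    rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
    rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
    sync
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics and nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill 72700 && wait 72700         # stop the nvmf_tgt process for this test
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # assumption: the net effect of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if

The same pattern repeats at the end of each test below, with the pid of that test's target process substituted.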
00:09:53.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.126 22:04:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:53.127 22:04:39 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:53.127 Cannot find device "nvmf_tgt_br" 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.127 Cannot find device "nvmf_tgt_br2" 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:53.127 Cannot find device "nvmf_tgt_br" 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:53.127 Cannot find device "nvmf_tgt_br2" 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:09:53.127 22:04:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:53.127 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:53.127 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.127 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:53.127 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.127 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:53.127 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.127 22:04:40 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:53.127 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.127 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.127 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:53.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:53.385 00:09:53.385 --- 10.0.0.2 ping statistics --- 00:09:53.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.385 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:53.385 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:53.385 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:09:53.385 00:09:53.385 --- 10.0.0.3 ping statistics --- 00:09:53.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.385 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:53.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:53.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:09:53.385 00:09:53.385 --- 10.0.0.1 ping statistics --- 00:09:53.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.385 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=73011 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 73011 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 73011 ']' 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.385 22:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:53.642 [2024-07-15 22:04:40.344121] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:09:53.642 [2024-07-15 22:04:40.344272] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.642 [2024-07-15 22:04:40.494283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.642 [2024-07-15 22:04:40.556238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.642 [2024-07-15 22:04:40.556300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:53.642 [2024-07-15 22:04:40.556327] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.642 [2024-07-15 22:04:40.556342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.642 [2024-07-15 22:04:40.556353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.642 [2024-07-15 22:04:40.556510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.642 [2024-07-15 22:04:40.556691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.642 [2024-07-15 22:04:40.556708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.900 22:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.900 22:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:09:53.900 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.900 22:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:53.900 22:04:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:53.900 22:04:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.900 22:04:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:54.465 [2024-07-15 22:04:41.121535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.465 22:04:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.723 22:04:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:54.723 22:04:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.289 22:04:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:55.289 22:04:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:55.547 22:04:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:55.804 22:04:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=37372be1-e80a-4845-8310-bfaee6672680 00:09:55.804 22:04:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 37372be1-e80a-4845-8310-bfaee6672680 lvol 20 00:09:56.061 22:04:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=064c1123-70ed-4f15-b92b-a9fae5ef776f 00:09:56.061 22:04:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:56.318 22:04:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 064c1123-70ed-4f15-b92b-a9fae5ef776f 00:09:56.880 22:04:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:57.184 [2024-07-15 22:04:43.893768] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.184 22:04:43 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:57.442 22:04:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73156 00:09:57.442 22:04:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:57.442 22:04:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:58.814 22:04:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 064c1123-70ed-4f15-b92b-a9fae5ef776f MY_SNAPSHOT 00:09:59.072 22:04:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4ab61331-0c66-42fb-85e7-bdb6f4e72c0c 00:09:59.072 22:04:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 064c1123-70ed-4f15-b92b-a9fae5ef776f 30 00:09:59.638 22:04:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 4ab61331-0c66-42fb-85e7-bdb6f4e72c0c MY_CLONE 00:09:59.896 22:04:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a61493e8-9889-48ad-935c-bd7608ba040a 00:09:59.896 22:04:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a61493e8-9889-48ad-935c-bd7608ba040a 00:10:00.829 22:04:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73156 00:10:08.933 Initializing NVMe Controllers 00:10:08.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:08.933 Controller IO queue size 128, less than required. 00:10:08.933 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:08.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:08.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:08.933 Initialization complete. Launching workers. 
00:10:08.933 ======================================================== 00:10:08.933 Latency(us) 00:10:08.933 Device Information : IOPS MiB/s Average min max 00:10:08.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9675.58 37.80 13233.35 404.35 88678.89 00:10:08.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8589.60 33.55 14907.71 3185.33 60805.38 00:10:08.933 ======================================================== 00:10:08.933 Total : 18265.18 71.35 14020.75 404.35 88678.89 00:10:08.933 00:10:08.933 22:04:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:08.933 22:04:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 064c1123-70ed-4f15-b92b-a9fae5ef776f 00:10:08.933 22:04:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37372be1-e80a-4845-8310-bfaee6672680 00:10:09.191 22:04:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:09.191 22:04:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:09.191 22:04:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:09.191 22:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.191 22:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:09.191 22:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.191 22:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:09.191 22:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.191 22:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.191 rmmod nvme_tcp 00:10:09.191 rmmod nvme_fabrics 00:10:09.191 rmmod nvme_keyring 00:10:09.191 22:04:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 73011 ']' 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 73011 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 73011 ']' 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 73011 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73011 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:09.191 killing process with pid 73011 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73011' 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 73011 00:10:09.191 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 73011 00:10:09.448 22:04:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.448 22:04:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.448 
22:04:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.448 22:04:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.448 22:04:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.448 22:04:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.448 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.448 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.448 22:04:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:09.448 00:10:09.448 real 0m16.502s 00:10:09.448 user 1m8.708s 00:10:09.448 sys 0m4.535s 00:10:09.448 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.448 22:04:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:09.448 ************************************ 00:10:09.448 END TEST nvmf_lvol 00:10:09.448 ************************************ 00:10:09.448 22:04:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:09.448 22:04:56 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:09.448 22:04:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:09.448 22:04:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.448 22:04:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:09.448 ************************************ 00:10:09.448 START TEST nvmf_lvs_grow 00:10:09.448 ************************************ 00:10:09.448 22:04:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:09.706 * Looking for test storage... 
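Stripped of the xtrace noise, the nvmf_lvol run that just finished drives the target through the RPC sequence sketched below. This is a condensed sketch of the traced rpc.py calls only; the shell variables stand in for the UUIDs the script captures from each create call (37372be1-..., 064c1123-..., 4ab61331-..., a61493e8-... in the trace above), and spdk_nvme_perf runs concurrently with the snapshot/clone steps.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                        # first base bdev (Malloc0, as logged)
    $rpc bdev_malloc_create 64 512                        # second base bdev (Malloc1)
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB logical volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf writes to the exported namespace, the volume is
    # snapshotted, resized, cloned and inflated underneath it:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"

The perf summary above then reflects I/O serviced on cores 3 and 4 (-c 0x18) while those lvol operations were in flight, before the subsystem, lvol and lvstore are deleted again.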
00:10:09.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:10:09.706 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:09.707 Cannot find device "nvmf_tgt_br" 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:09.707 Cannot find device "nvmf_tgt_br2" 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:09.707 Cannot find device "nvmf_tgt_br" 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:09.707 Cannot find device "nvmf_tgt_br2" 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:09.707 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:09.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:09.707 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:09.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:09.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:10:09.967 00:10:09.967 --- 10.0.0.2 ping statistics --- 00:10:09.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.967 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:09.967 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:09.967 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:10:09.967 00:10:09.967 --- 10.0.0.3 ping statistics --- 00:10:09.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.967 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:09.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:09.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:10:09.967 00:10:09.967 --- 10.0.0.1 ping statistics --- 00:10:09.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.967 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73517 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73517 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73517 ']' 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
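The nvmf_veth_init plumbing traced just above (here and in the earlier nvmf_lvol section) amounts to the commands below: the two target-side veth ends move into the nvmf_tgt_ns_spdk namespace, their peers are bridged on the host side, and 4420/tcp is opened on the initiator interface. This is a condensed sketch of the traced commands, consolidated onto fewer lines; nothing is added beyond what the trace shows.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the trace are the sanity check that the host side can reach 10.0.0.2 and 10.0.0.3 and that the namespace can reach 10.0.0.1 back through the bridge; nvmf_tgt is then launched inside the namespace with ip netns exec, which is why its listeners later bind to 10.0.0.2:4420.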
00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.967 22:04:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:09.968 [2024-07-15 22:04:56.909696] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:10:09.968 [2024-07-15 22:04:56.909790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.224 [2024-07-15 22:04:57.046627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.225 [2024-07-15 22:04:57.132234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.225 [2024-07-15 22:04:57.132313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.225 [2024-07-15 22:04:57.132330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.225 [2024-07-15 22:04:57.132341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.225 [2024-07-15 22:04:57.132352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.225 [2024-07-15 22:04:57.132392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.481 22:04:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.481 22:04:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:10:10.481 22:04:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:10.481 22:04:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:10.481 22:04:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:10.481 22:04:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.482 22:04:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:10.739 [2024-07-15 22:04:57.542940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:10.739 ************************************ 00:10:10.739 START TEST lvs_grow_clean 00:10:10.739 ************************************ 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:10.739 22:04:57 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:10.739 22:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:10.996 22:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:10.996 22:04:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:11.559 22:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ef4aa5ec-f0c9-4232-b415-16e6d29181bc 00:10:11.559 22:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4aa5ec-f0c9-4232-b415-16e6d29181bc 00:10:11.559 22:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:11.815 22:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:11.815 22:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:11.815 22:04:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef4aa5ec-f0c9-4232-b415-16e6d29181bc lvol 150 00:10:12.377 22:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=450eafb7-c672-49e0-a24a-3b210a651cb3 00:10:12.377 22:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:12.377 22:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:12.634 [2024-07-15 22:04:59.473191] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:12.634 [2024-07-15 22:04:59.473282] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:12.634 true 00:10:12.634 22:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4aa5ec-f0c9-4232-b415-16e6d29181bc 00:10:12.634 22:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:12.891 22:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:12.891 22:04:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:13.455 22:05:00 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 450eafb7-c672-49e0-a24a-3b210a651cb3 00:10:13.713 22:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:13.971 [2024-07-15 22:05:00.842479] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.971 22:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:14.537 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73676 00:10:14.537 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:14.537 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:14.537 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73676 /var/tmp/bdevperf.sock 00:10:14.537 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 73676 ']' 00:10:14.537 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:14.537 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:14.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:14.537 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:14.537 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:14.537 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:14.537 [2024-07-15 22:05:01.305613] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:10:14.537 [2024-07-15 22:05:01.305761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73676 ] 00:10:14.537 [2024-07-15 22:05:01.445956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.795 [2024-07-15 22:05:01.530092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.795 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:14.795 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:10:14.795 22:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:15.053 Nvme0n1 00:10:15.312 22:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:15.570 [ 00:10:15.570 { 00:10:15.570 "aliases": [ 00:10:15.570 "450eafb7-c672-49e0-a24a-3b210a651cb3" 00:10:15.570 ], 00:10:15.570 "assigned_rate_limits": { 00:10:15.570 "r_mbytes_per_sec": 0, 00:10:15.570 "rw_ios_per_sec": 0, 00:10:15.570 "rw_mbytes_per_sec": 0, 00:10:15.570 "w_mbytes_per_sec": 0 00:10:15.570 }, 00:10:15.570 "block_size": 4096, 00:10:15.570 "claimed": false, 00:10:15.570 "driver_specific": { 00:10:15.570 "mp_policy": "active_passive", 00:10:15.570 "nvme": [ 00:10:15.570 { 00:10:15.570 "ctrlr_data": { 00:10:15.570 "ana_reporting": false, 00:10:15.570 "cntlid": 1, 00:10:15.570 "firmware_revision": "24.09", 00:10:15.570 "model_number": "SPDK bdev Controller", 00:10:15.570 "multi_ctrlr": true, 00:10:15.570 "oacs": { 00:10:15.570 "firmware": 0, 00:10:15.570 "format": 0, 00:10:15.570 "ns_manage": 0, 00:10:15.570 "security": 0 00:10:15.570 }, 00:10:15.570 "serial_number": "SPDK0", 00:10:15.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:15.570 "vendor_id": "0x8086" 00:10:15.570 }, 00:10:15.570 "ns_data": { 00:10:15.570 "can_share": true, 00:10:15.570 "id": 1 00:10:15.570 }, 00:10:15.570 "trid": { 00:10:15.570 "adrfam": "IPv4", 00:10:15.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:15.570 "traddr": "10.0.0.2", 00:10:15.570 "trsvcid": "4420", 00:10:15.570 "trtype": "TCP" 00:10:15.570 }, 00:10:15.570 "vs": { 00:10:15.570 "nvme_version": "1.3" 00:10:15.570 } 00:10:15.570 } 00:10:15.570 ] 00:10:15.570 }, 00:10:15.570 "memory_domains": [ 00:10:15.570 { 00:10:15.570 "dma_device_id": "system", 00:10:15.570 "dma_device_type": 1 00:10:15.570 } 00:10:15.570 ], 00:10:15.570 "name": "Nvme0n1", 00:10:15.570 "num_blocks": 38912, 00:10:15.570 "product_name": "NVMe disk", 00:10:15.570 "supported_io_types": { 00:10:15.570 "abort": true, 00:10:15.570 "compare": true, 00:10:15.570 "compare_and_write": true, 00:10:15.570 "copy": true, 00:10:15.570 "flush": true, 00:10:15.570 "get_zone_info": false, 00:10:15.570 "nvme_admin": true, 00:10:15.570 "nvme_io": true, 00:10:15.570 "nvme_io_md": false, 00:10:15.570 "nvme_iov_md": false, 00:10:15.570 "read": true, 00:10:15.570 "reset": true, 00:10:15.570 "seek_data": false, 00:10:15.570 "seek_hole": false, 00:10:15.570 "unmap": true, 00:10:15.570 "write": true, 00:10:15.570 "write_zeroes": true, 00:10:15.570 "zcopy": false, 00:10:15.570 
"zone_append": false, 00:10:15.570 "zone_management": false 00:10:15.570 }, 00:10:15.570 "uuid": "450eafb7-c672-49e0-a24a-3b210a651cb3", 00:10:15.570 "zoned": false 00:10:15.570 } 00:10:15.570 ] 00:10:15.570 22:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73710 00:10:15.570 22:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:15.570 22:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:15.828 Running I/O for 10 seconds... 00:10:16.762 Latency(us) 00:10:16.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.762 Nvme0n1 : 1.00 5983.00 23.37 0.00 0.00 0.00 0.00 0.00 00:10:16.762 =================================================================================================================== 00:10:16.762 Total : 5983.00 23.37 0.00 0.00 0.00 0.00 0.00 00:10:16.762 00:10:17.695 22:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ef4aa5ec-f0c9-4232-b415-16e6d29181bc 00:10:17.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.695 Nvme0n1 : 2.00 5606.50 21.90 0.00 0.00 0.00 0.00 0.00 00:10:17.695 =================================================================================================================== 00:10:17.695 Total : 5606.50 21.90 0.00 0.00 0.00 0.00 0.00 00:10:17.695 00:10:18.260 true 00:10:18.260 22:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4aa5ec-f0c9-4232-b415-16e6d29181bc 00:10:18.260 22:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:18.517 22:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:18.517 22:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:18.517 22:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73710 00:10:18.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.776 Nvme0n1 : 3.00 5540.67 21.64 0.00 0.00 0.00 0.00 0.00 00:10:18.776 =================================================================================================================== 00:10:18.776 Total : 5540.67 21.64 0.00 0.00 0.00 0.00 0.00 00:10:18.776 00:10:19.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.783 Nvme0n1 : 4.00 5995.75 23.42 0.00 0.00 0.00 0.00 0.00 00:10:19.783 =================================================================================================================== 00:10:19.783 Total : 5995.75 23.42 0.00 0.00 0.00 0.00 0.00 00:10:19.783 00:10:20.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.716 Nvme0n1 : 5.00 6294.60 24.59 0.00 0.00 0.00 0.00 0.00 00:10:20.716 =================================================================================================================== 00:10:20.716 Total : 6294.60 24.59 0.00 0.00 0.00 0.00 0.00 00:10:20.716 00:10:22.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.085 
Nvme0n1 : 6.00 6450.00 25.20 0.00 0.00 0.00 0.00 0.00 00:10:22.085 =================================================================================================================== 00:10:22.085 Total : 6450.00 25.20 0.00 0.00 0.00 0.00 0.00 00:10:22.085 00:10:23.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.015 Nvme0n1 : 7.00 6471.86 25.28 0.00 0.00 0.00 0.00 0.00 00:10:23.015 =================================================================================================================== 00:10:23.015 Total : 6471.86 25.28 0.00 0.00 0.00 0.00 0.00 00:10:23.015 00:10:23.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.949 Nvme0n1 : 8.00 6540.25 25.55 0.00 0.00 0.00 0.00 0.00 00:10:23.949 =================================================================================================================== 00:10:23.949 Total : 6540.25 25.55 0.00 0.00 0.00 0.00 0.00 00:10:23.949 00:10:24.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.882 Nvme0n1 : 9.00 6599.78 25.78 0.00 0.00 0.00 0.00 0.00 00:10:24.882 =================================================================================================================== 00:10:24.882 Total : 6599.78 25.78 0.00 0.00 0.00 0.00 0.00 00:10:24.882 00:10:25.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.817 Nvme0n1 : 10.00 6617.40 25.85 0.00 0.00 0.00 0.00 0.00 00:10:25.817 =================================================================================================================== 00:10:25.817 Total : 6617.40 25.85 0.00 0.00 0.00 0.00 0.00 00:10:25.817 00:10:25.817 00:10:25.817 Latency(us) 00:10:25.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.817 Nvme0n1 : 10.01 6623.36 25.87 0.00 0.00 19319.18 7179.17 51713.86 00:10:25.817 =================================================================================================================== 00:10:25.817 Total : 6623.36 25.87 0.00 0.00 19319.18 7179.17 51713.86 00:10:25.817 0 00:10:25.817 22:05:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73676 00:10:25.817 22:05:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 73676 ']' 00:10:25.817 22:05:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 73676 00:10:25.817 22:05:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:10:25.817 22:05:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.817 22:05:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73676 00:10:25.817 killing process with pid 73676 00:10:25.817 Received shutdown signal, test time was about 10.000000 seconds 00:10:25.817 00:10:25.817 Latency(us) 00:10:25.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.817 =================================================================================================================== 00:10:25.817 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:25.817 22:05:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:25.817 22:05:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = 
sudo ']' 00:10:25.817 22:05:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73676' 00:10:25.817 22:05:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 73676 00:10:25.817 22:05:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 73676 00:10:26.075 22:05:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:26.334 22:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:26.593 22:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4aa5ec-f0c9-4232-b415-16e6d29181bc 00:10:26.593 22:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:26.851 22:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:26.851 22:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:26.851 22:05:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:27.417 [2024-07-15 22:05:14.113791] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:27.417 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4aa5ec-f0c9-4232-b415-16e6d29181bc 00:10:27.417 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:10:27.417 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4aa5ec-f0c9-4232-b415-16e6d29181bc 00:10:27.417 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.417 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.417 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.417 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.417 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.417 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.417 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.417 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:27.417 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4aa5ec-f0c9-4232-b415-16e6d29181bc 00:10:27.676 2024/07/15 22:05:14 error on JSON-RPC call, method: 
bdev_lvol_get_lvstores, params: map[uuid:ef4aa5ec-f0c9-4232-b415-16e6d29181bc], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:27.676 request: 00:10:27.676 { 00:10:27.676 "method": "bdev_lvol_get_lvstores", 00:10:27.676 "params": { 00:10:27.676 "uuid": "ef4aa5ec-f0c9-4232-b415-16e6d29181bc" 00:10:27.676 } 00:10:27.676 } 00:10:27.676 Got JSON-RPC error response 00:10:27.676 GoRPCClient: error on JSON-RPC call 00:10:27.676 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:10:27.676 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:27.676 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:27.676 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:27.676 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:27.934 aio_bdev 00:10:27.934 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 450eafb7-c672-49e0-a24a-3b210a651cb3 00:10:27.934 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=450eafb7-c672-49e0-a24a-3b210a651cb3 00:10:27.934 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:27.934 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:10:27.935 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:27.935 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:27.935 22:05:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:28.501 22:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 450eafb7-c672-49e0-a24a-3b210a651cb3 -t 2000 00:10:28.501 [ 00:10:28.501 { 00:10:28.501 "aliases": [ 00:10:28.501 "lvs/lvol" 00:10:28.501 ], 00:10:28.501 "assigned_rate_limits": { 00:10:28.501 "r_mbytes_per_sec": 0, 00:10:28.501 "rw_ios_per_sec": 0, 00:10:28.501 "rw_mbytes_per_sec": 0, 00:10:28.501 "w_mbytes_per_sec": 0 00:10:28.501 }, 00:10:28.501 "block_size": 4096, 00:10:28.501 "claimed": false, 00:10:28.501 "driver_specific": { 00:10:28.501 "lvol": { 00:10:28.501 "base_bdev": "aio_bdev", 00:10:28.501 "clone": false, 00:10:28.501 "esnap_clone": false, 00:10:28.501 "lvol_store_uuid": "ef4aa5ec-f0c9-4232-b415-16e6d29181bc", 00:10:28.501 "num_allocated_clusters": 38, 00:10:28.501 "snapshot": false, 00:10:28.501 "thin_provision": false 00:10:28.501 } 00:10:28.501 }, 00:10:28.501 "name": "450eafb7-c672-49e0-a24a-3b210a651cb3", 00:10:28.501 "num_blocks": 38912, 00:10:28.501 "product_name": "Logical Volume", 00:10:28.501 "supported_io_types": { 00:10:28.501 "abort": false, 00:10:28.501 "compare": false, 00:10:28.501 "compare_and_write": false, 00:10:28.501 "copy": false, 00:10:28.501 "flush": false, 00:10:28.501 "get_zone_info": false, 00:10:28.501 "nvme_admin": false, 00:10:28.501 "nvme_io": false, 00:10:28.501 "nvme_io_md": false, 00:10:28.501 "nvme_iov_md": false, 00:10:28.501 "read": true, 00:10:28.501 "reset": true, 
00:10:28.501 "seek_data": true, 00:10:28.501 "seek_hole": true, 00:10:28.501 "unmap": true, 00:10:28.501 "write": true, 00:10:28.501 "write_zeroes": true, 00:10:28.501 "zcopy": false, 00:10:28.502 "zone_append": false, 00:10:28.502 "zone_management": false 00:10:28.502 }, 00:10:28.502 "uuid": "450eafb7-c672-49e0-a24a-3b210a651cb3", 00:10:28.502 "zoned": false 00:10:28.502 } 00:10:28.502 ] 00:10:28.502 22:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:10:28.502 22:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4aa5ec-f0c9-4232-b415-16e6d29181bc 00:10:28.502 22:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:29.067 22:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:29.067 22:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4aa5ec-f0c9-4232-b415-16e6d29181bc 00:10:29.067 22:05:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:29.325 22:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:29.325 22:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 450eafb7-c672-49e0-a24a-3b210a651cb3 00:10:29.584 22:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef4aa5ec-f0c9-4232-b415-16e6d29181bc 00:10:29.842 22:05:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:30.429 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:30.687 ************************************ 00:10:30.687 END TEST lvs_grow_clean 00:10:30.687 ************************************ 00:10:30.687 00:10:30.687 real 0m19.942s 00:10:30.687 user 0m19.371s 00:10:30.687 sys 0m2.404s 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:30.687 ************************************ 00:10:30.687 START TEST lvs_grow_dirty 00:10:30.687 ************************************ 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:30.687 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:30.688 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:30.688 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:30.946 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:30.946 22:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:31.512 22:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:31.512 22:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:31.512 22:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:31.778 22:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:31.778 22:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:31.778 22:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 894911d3-8fea-40ee-807e-ef4eed54ab5b lvol 150 00:10:32.047 22:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=18c180fc-ace2-42a5-a4b4-e4538c8c784c 00:10:32.047 22:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:32.047 22:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:32.304 [2024-07-15 22:05:19.101098] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:32.304 [2024-07-15 22:05:19.101187] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:32.304 true 00:10:32.304 22:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:32.304 22:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:32.562 22:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:32.562 22:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:32.819 22:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 18c180fc-ace2-42a5-a4b4-e4538c8c784c 00:10:33.076 22:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:33.333 [2024-07-15 22:05:20.097650] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.334 22:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:33.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:33.591 22:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74120 00:10:33.591 22:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:33.591 22:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:33.591 22:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74120 /var/tmp/bdevperf.sock 00:10:33.592 22:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74120 ']' 00:10:33.592 22:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:33.592 22:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:33.592 22:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:33.592 22:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:33.592 22:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:33.850 [2024-07-15 22:05:20.567607] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
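The initiator side mirrors the clean case: bdevperf runs as its own SPDK app in -z (wait-for-RPC) mode on /var/tmp/bdevperf.sock, a controller is attached to it over that socket, and bdevperf.py kicks off the 10-second randwrite job; roughly two seconds in, the target grows the lvstore underneath the running workload. Condensed from the trace, with $rpc and $lvs as shorthand ($lvs being 894911d3-... in this run):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
     -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# Target side, issued while the randwrite job is still running:
$rpc bdev_lvol_grow_lvstore -u "$lvs"                     # consume the extra 200 MiB exposed by the rescan
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99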
00:10:33.850 [2024-07-15 22:05:20.567750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74120 ] 00:10:33.850 [2024-07-15 22:05:20.707234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.850 [2024-07-15 22:05:20.793581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.785 22:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:34.785 22:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:34.785 22:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:35.043 Nvme0n1 00:10:35.043 22:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:35.302 [ 00:10:35.302 { 00:10:35.302 "aliases": [ 00:10:35.302 "18c180fc-ace2-42a5-a4b4-e4538c8c784c" 00:10:35.302 ], 00:10:35.302 "assigned_rate_limits": { 00:10:35.302 "r_mbytes_per_sec": 0, 00:10:35.302 "rw_ios_per_sec": 0, 00:10:35.302 "rw_mbytes_per_sec": 0, 00:10:35.302 "w_mbytes_per_sec": 0 00:10:35.302 }, 00:10:35.302 "block_size": 4096, 00:10:35.302 "claimed": false, 00:10:35.302 "driver_specific": { 00:10:35.302 "mp_policy": "active_passive", 00:10:35.302 "nvme": [ 00:10:35.302 { 00:10:35.302 "ctrlr_data": { 00:10:35.302 "ana_reporting": false, 00:10:35.302 "cntlid": 1, 00:10:35.302 "firmware_revision": "24.09", 00:10:35.302 "model_number": "SPDK bdev Controller", 00:10:35.302 "multi_ctrlr": true, 00:10:35.302 "oacs": { 00:10:35.302 "firmware": 0, 00:10:35.302 "format": 0, 00:10:35.302 "ns_manage": 0, 00:10:35.302 "security": 0 00:10:35.302 }, 00:10:35.302 "serial_number": "SPDK0", 00:10:35.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:35.302 "vendor_id": "0x8086" 00:10:35.302 }, 00:10:35.302 "ns_data": { 00:10:35.302 "can_share": true, 00:10:35.302 "id": 1 00:10:35.302 }, 00:10:35.302 "trid": { 00:10:35.302 "adrfam": "IPv4", 00:10:35.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:35.302 "traddr": "10.0.0.2", 00:10:35.302 "trsvcid": "4420", 00:10:35.302 "trtype": "TCP" 00:10:35.302 }, 00:10:35.302 "vs": { 00:10:35.302 "nvme_version": "1.3" 00:10:35.302 } 00:10:35.302 } 00:10:35.302 ] 00:10:35.302 }, 00:10:35.302 "memory_domains": [ 00:10:35.302 { 00:10:35.302 "dma_device_id": "system", 00:10:35.302 "dma_device_type": 1 00:10:35.302 } 00:10:35.302 ], 00:10:35.302 "name": "Nvme0n1", 00:10:35.302 "num_blocks": 38912, 00:10:35.302 "product_name": "NVMe disk", 00:10:35.302 "supported_io_types": { 00:10:35.302 "abort": true, 00:10:35.302 "compare": true, 00:10:35.302 "compare_and_write": true, 00:10:35.302 "copy": true, 00:10:35.302 "flush": true, 00:10:35.302 "get_zone_info": false, 00:10:35.302 "nvme_admin": true, 00:10:35.302 "nvme_io": true, 00:10:35.302 "nvme_io_md": false, 00:10:35.302 "nvme_iov_md": false, 00:10:35.302 "read": true, 00:10:35.302 "reset": true, 00:10:35.302 "seek_data": false, 00:10:35.302 "seek_hole": false, 00:10:35.302 "unmap": true, 00:10:35.302 "write": true, 00:10:35.302 "write_zeroes": true, 00:10:35.302 "zcopy": false, 00:10:35.302 
"zone_append": false, 00:10:35.302 "zone_management": false 00:10:35.302 }, 00:10:35.302 "uuid": "18c180fc-ace2-42a5-a4b4-e4538c8c784c", 00:10:35.302 "zoned": false 00:10:35.302 } 00:10:35.302 ] 00:10:35.302 22:05:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74168 00:10:35.302 22:05:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:35.302 22:05:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:35.560 Running I/O for 10 seconds... 00:10:36.498 Latency(us) 00:10:36.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.498 Nvme0n1 : 1.00 7447.00 29.09 0.00 0.00 0.00 0.00 0.00 00:10:36.498 =================================================================================================================== 00:10:36.498 Total : 7447.00 29.09 0.00 0.00 0.00 0.00 0.00 00:10:36.498 00:10:37.433 22:05:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:37.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.433 Nvme0n1 : 2.00 7578.50 29.60 0.00 0.00 0.00 0.00 0.00 00:10:37.433 =================================================================================================================== 00:10:37.433 Total : 7578.50 29.60 0.00 0.00 0.00 0.00 0.00 00:10:37.433 00:10:37.691 true 00:10:37.691 22:05:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:37.691 22:05:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:38.257 22:05:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:38.257 22:05:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:38.257 22:05:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74168 00:10:38.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.521 Nvme0n1 : 3.00 7582.00 29.62 0.00 0.00 0.00 0.00 0.00 00:10:38.521 =================================================================================================================== 00:10:38.521 Total : 7582.00 29.62 0.00 0.00 0.00 0.00 0.00 00:10:38.521 00:10:39.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.464 Nvme0n1 : 4.00 7565.75 29.55 0.00 0.00 0.00 0.00 0.00 00:10:39.464 =================================================================================================================== 00:10:39.464 Total : 7565.75 29.55 0.00 0.00 0.00 0.00 0.00 00:10:39.464 00:10:40.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.397 Nvme0n1 : 5.00 7468.00 29.17 0.00 0.00 0.00 0.00 0.00 00:10:40.397 =================================================================================================================== 00:10:40.397 Total : 7468.00 29.17 0.00 0.00 0.00 0.00 0.00 00:10:40.397 00:10:41.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.769 
Nvme0n1 : 6.00 7043.33 27.51 0.00 0.00 0.00 0.00 0.00 00:10:41.769 =================================================================================================================== 00:10:41.769 Total : 7043.33 27.51 0.00 0.00 0.00 0.00 0.00 00:10:41.769 00:10:42.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.703 Nvme0n1 : 7.00 6890.00 26.91 0.00 0.00 0.00 0.00 0.00 00:10:42.703 =================================================================================================================== 00:10:42.703 Total : 6890.00 26.91 0.00 0.00 0.00 0.00 0.00 00:10:42.703 00:10:43.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.638 Nvme0n1 : 8.00 6919.88 27.03 0.00 0.00 0.00 0.00 0.00 00:10:43.638 =================================================================================================================== 00:10:43.638 Total : 6919.88 27.03 0.00 0.00 0.00 0.00 0.00 00:10:43.638 00:10:44.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.573 Nvme0n1 : 9.00 6949.56 27.15 0.00 0.00 0.00 0.00 0.00 00:10:44.573 =================================================================================================================== 00:10:44.573 Total : 6949.56 27.15 0.00 0.00 0.00 0.00 0.00 00:10:44.573 00:10:45.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.550 Nvme0n1 : 10.00 6928.40 27.06 0.00 0.00 0.00 0.00 0.00 00:10:45.550 =================================================================================================================== 00:10:45.550 Total : 6928.40 27.06 0.00 0.00 0.00 0.00 0.00 00:10:45.550 00:10:45.550 00:10:45.550 Latency(us) 00:10:45.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.550 Nvme0n1 : 10.01 6934.34 27.09 0.00 0.00 18453.62 7685.59 287881.77 00:10:45.550 =================================================================================================================== 00:10:45.550 Total : 6934.34 27.09 0.00 0.00 18453.62 7685.59 287881.77 00:10:45.550 0 00:10:45.550 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74120 00:10:45.550 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74120 ']' 00:10:45.550 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74120 00:10:45.550 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:10:45.550 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:45.550 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74120 00:10:45.550 killing process with pid 74120 00:10:45.550 Received shutdown signal, test time was about 10.000000 seconds 00:10:45.550 00:10:45.550 Latency(us) 00:10:45.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.550 =================================================================================================================== 00:10:45.550 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:45.550 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:45.550 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 
= sudo ']' 00:10:45.550 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74120' 00:10:45.550 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74120 00:10:45.550 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74120 00:10:45.808 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:46.067 22:05:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:46.326 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:46.326 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73517 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73517 00:10:46.584 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73517 Killed "${NVMF_APP[@]}" "$@" 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74331 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74331 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74331 ']' 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
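This is the point that distinguishes the dirty case: the nvmf target is killed with SIGKILL while the lvstore is still loaded, so nothing is flushed or unloaded cleanly. The lines that follow start a fresh target and simply re-create the AIO bdev on the same backing file; blobstore recovery ("Performing recovery on blobstore" below) replays the metadata and the lvol comes back with its 38 allocated clusters intact. In outline, with $rpc, $aio_file, $lvs and $lvol as before (variable names illustrative; the trace uses the literal pids and UUIDs):

kill -9 "$nvmf_tgt_pid" ; wait "$nvmf_tgt_pid"            # hard-kill the target (pid 73517 here), lvstore left dirty
# ...a new nvmf_tgt is started (nvmfappstart -m 0x1), then:
$rpc bdev_aio_create "$aio_file" aio_bdev 4096            # examine of the bdev triggers blobstore recovery
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b "$lvol" -t 2000                    # lvol is back, num_allocated_clusters: 38
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # still 61 (99 - 38)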
00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:46.584 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:46.843 [2024-07-15 22:05:33.582584] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:10:46.843 [2024-07-15 22:05:33.582739] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.843 [2024-07-15 22:05:33.728497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.102 [2024-07-15 22:05:33.797933] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.102 [2024-07-15 22:05:33.797999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.103 [2024-07-15 22:05:33.798011] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.103 [2024-07-15 22:05:33.798020] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.103 [2024-07-15 22:05:33.798027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.103 [2024-07-15 22:05:33.798062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.103 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:47.103 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:47.103 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:47.103 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:47.103 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:47.103 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.103 22:05:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:47.361 [2024-07-15 22:05:34.285017] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:47.361 [2024-07-15 22:05:34.285291] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:47.361 [2024-07-15 22:05:34.285468] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:47.620 22:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:47.620 22:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 18c180fc-ace2-42a5-a4b4-e4538c8c784c 00:10:47.620 22:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=18c180fc-ace2-42a5-a4b4-e4538c8c784c 00:10:47.620 22:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:47.620 22:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:47.620 22:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:47.620 22:05:34 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:47.620 22:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:47.879 22:05:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 18c180fc-ace2-42a5-a4b4-e4538c8c784c -t 2000 00:10:48.136 [ 00:10:48.136 { 00:10:48.136 "aliases": [ 00:10:48.136 "lvs/lvol" 00:10:48.136 ], 00:10:48.136 "assigned_rate_limits": { 00:10:48.136 "r_mbytes_per_sec": 0, 00:10:48.136 "rw_ios_per_sec": 0, 00:10:48.136 "rw_mbytes_per_sec": 0, 00:10:48.136 "w_mbytes_per_sec": 0 00:10:48.136 }, 00:10:48.136 "block_size": 4096, 00:10:48.136 "claimed": false, 00:10:48.136 "driver_specific": { 00:10:48.136 "lvol": { 00:10:48.136 "base_bdev": "aio_bdev", 00:10:48.136 "clone": false, 00:10:48.136 "esnap_clone": false, 00:10:48.136 "lvol_store_uuid": "894911d3-8fea-40ee-807e-ef4eed54ab5b", 00:10:48.136 "num_allocated_clusters": 38, 00:10:48.136 "snapshot": false, 00:10:48.136 "thin_provision": false 00:10:48.136 } 00:10:48.136 }, 00:10:48.136 "name": "18c180fc-ace2-42a5-a4b4-e4538c8c784c", 00:10:48.136 "num_blocks": 38912, 00:10:48.136 "product_name": "Logical Volume", 00:10:48.136 "supported_io_types": { 00:10:48.136 "abort": false, 00:10:48.136 "compare": false, 00:10:48.136 "compare_and_write": false, 00:10:48.136 "copy": false, 00:10:48.136 "flush": false, 00:10:48.137 "get_zone_info": false, 00:10:48.137 "nvme_admin": false, 00:10:48.137 "nvme_io": false, 00:10:48.137 "nvme_io_md": false, 00:10:48.137 "nvme_iov_md": false, 00:10:48.137 "read": true, 00:10:48.137 "reset": true, 00:10:48.137 "seek_data": true, 00:10:48.137 "seek_hole": true, 00:10:48.137 "unmap": true, 00:10:48.137 "write": true, 00:10:48.137 "write_zeroes": true, 00:10:48.137 "zcopy": false, 00:10:48.137 "zone_append": false, 00:10:48.137 "zone_management": false 00:10:48.137 }, 00:10:48.137 "uuid": "18c180fc-ace2-42a5-a4b4-e4538c8c784c", 00:10:48.137 "zoned": false 00:10:48.137 } 00:10:48.137 ] 00:10:48.137 22:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:48.137 22:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:48.137 22:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:48.703 22:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:48.703 22:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:48.703 22:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:48.961 22:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:48.961 22:05:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:49.221 [2024-07-15 22:05:36.163944] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:49.479 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:49.479 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:10:49.479 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:49.479 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:49.479 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.479 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:49.479 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.479 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:49.479 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.479 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:49.479 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:49.479 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:49.737 2024/07/15 22:05:36 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:894911d3-8fea-40ee-807e-ef4eed54ab5b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:49.737 request: 00:10:49.737 { 00:10:49.737 "method": "bdev_lvol_get_lvstores", 00:10:49.737 "params": { 00:10:49.737 "uuid": "894911d3-8fea-40ee-807e-ef4eed54ab5b" 00:10:49.737 } 00:10:49.737 } 00:10:49.737 Got JSON-RPC error response 00:10:49.737 GoRPCClient: error on JSON-RPC call 00:10:49.737 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:10:49.737 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:49.737 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:49.737 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:49.737 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:49.995 aio_bdev 00:10:49.995 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 18c180fc-ace2-42a5-a4b4-e4538c8c784c 00:10:49.995 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=18c180fc-ace2-42a5-a4b4-e4538c8c784c 00:10:49.995 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:49.995 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:49.995 22:05:36 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:49.995 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:49.995 22:05:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:50.252 22:05:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 18c180fc-ace2-42a5-a4b4-e4538c8c784c -t 2000 00:10:50.512 [ 00:10:50.512 { 00:10:50.512 "aliases": [ 00:10:50.512 "lvs/lvol" 00:10:50.512 ], 00:10:50.512 "assigned_rate_limits": { 00:10:50.512 "r_mbytes_per_sec": 0, 00:10:50.512 "rw_ios_per_sec": 0, 00:10:50.512 "rw_mbytes_per_sec": 0, 00:10:50.512 "w_mbytes_per_sec": 0 00:10:50.512 }, 00:10:50.512 "block_size": 4096, 00:10:50.512 "claimed": false, 00:10:50.512 "driver_specific": { 00:10:50.512 "lvol": { 00:10:50.512 "base_bdev": "aio_bdev", 00:10:50.512 "clone": false, 00:10:50.512 "esnap_clone": false, 00:10:50.512 "lvol_store_uuid": "894911d3-8fea-40ee-807e-ef4eed54ab5b", 00:10:50.512 "num_allocated_clusters": 38, 00:10:50.512 "snapshot": false, 00:10:50.512 "thin_provision": false 00:10:50.512 } 00:10:50.512 }, 00:10:50.512 "name": "18c180fc-ace2-42a5-a4b4-e4538c8c784c", 00:10:50.512 "num_blocks": 38912, 00:10:50.512 "product_name": "Logical Volume", 00:10:50.512 "supported_io_types": { 00:10:50.512 "abort": false, 00:10:50.512 "compare": false, 00:10:50.512 "compare_and_write": false, 00:10:50.512 "copy": false, 00:10:50.512 "flush": false, 00:10:50.512 "get_zone_info": false, 00:10:50.512 "nvme_admin": false, 00:10:50.512 "nvme_io": false, 00:10:50.512 "nvme_io_md": false, 00:10:50.512 "nvme_iov_md": false, 00:10:50.512 "read": true, 00:10:50.512 "reset": true, 00:10:50.512 "seek_data": true, 00:10:50.512 "seek_hole": true, 00:10:50.512 "unmap": true, 00:10:50.512 "write": true, 00:10:50.512 "write_zeroes": true, 00:10:50.512 "zcopy": false, 00:10:50.512 "zone_append": false, 00:10:50.512 "zone_management": false 00:10:50.512 }, 00:10:50.512 "uuid": "18c180fc-ace2-42a5-a4b4-e4538c8c784c", 00:10:50.512 "zoned": false 00:10:50.512 } 00:10:50.512 ] 00:10:50.512 22:05:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:50.512 22:05:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:50.512 22:05:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:50.772 22:05:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:50.772 22:05:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:50.772 22:05:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:51.031 22:05:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:51.031 22:05:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 18c180fc-ace2-42a5-a4b4-e4538c8c784c 00:10:51.289 22:05:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 894911d3-8fea-40ee-807e-ef4eed54ab5b 00:10:51.548 22:05:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:52.115 22:05:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:52.373 ************************************ 00:10:52.373 END TEST lvs_grow_dirty 00:10:52.373 ************************************ 00:10:52.373 00:10:52.373 real 0m21.580s 00:10:52.373 user 0m45.789s 00:10:52.373 sys 0m8.086s 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:52.373 nvmf_trace.0 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.373 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:52.373 rmmod nvme_tcp 00:10:52.373 rmmod nvme_fabrics 00:10:52.632 rmmod nvme_keyring 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74331 ']' 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74331 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74331 ']' 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74331 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:10:52.632 22:05:39 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74331 00:10:52.632 killing process with pid 74331 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74331' 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74331 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74331 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:52.632 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.891 22:05:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:52.891 00:10:52.891 real 0m43.243s 00:10:52.891 user 1m11.700s 00:10:52.891 sys 0m11.101s 00:10:52.891 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.891 22:05:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:52.891 ************************************ 00:10:52.891 END TEST nvmf_lvs_grow 00:10:52.891 ************************************ 00:10:52.891 22:05:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:52.891 22:05:39 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:52.891 22:05:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:52.891 22:05:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.891 22:05:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:52.891 ************************************ 00:10:52.891 START TEST nvmf_bdev_io_wait 00:10:52.891 ************************************ 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:52.891 * Looking for test storage... 
00:10:52.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.891 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:52.892 Cannot find device "nvmf_tgt_br" 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:52.892 Cannot find device "nvmf_tgt_br2" 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:52.892 Cannot find device "nvmf_tgt_br" 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:52.892 Cannot find device "nvmf_tgt_br2" 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:10:52.892 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:53.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:53.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:53.150 22:05:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:53.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:53.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:10:53.150 00:10:53.150 --- 10.0.0.2 ping statistics --- 00:10:53.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.150 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:53.150 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:53.150 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:10:53.150 00:10:53.150 --- 10.0.0.3 ping statistics --- 00:10:53.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.150 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:53.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:53.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:10:53.150 00:10:53.150 --- 10.0.0.1 ping statistics --- 00:10:53.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.150 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:53.150 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=74737 00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 74737 00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 74737 ']' 00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:53.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
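What the nvmf_veth_init trace above amounts to is a small veth-plus-bridge topology: the target side lives inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 (with 10.0.0.3 on a second interface), the initiator side stays in the root namespace on 10.0.0.1, both legs are enslaved to the nvmf_br bridge, and an iptables rule admits TCP port 4420 on the initiator interface. A minimal sketch of that sequence, using only interface names and addresses taken from the trace (the full helper in test/nvmf/common.sh also brings up lo in the namespace and wires up the second target interface the same way):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # root namespace -> target, as verified in the trace

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the helper first tears down any interfaces and namespace left over from a previous run (each failed deletion is followed by "true"), so those errors are harmless on a clean host.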
00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:53.408 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.408 [2024-07-15 22:05:40.180861] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:10:53.408 [2024-07-15 22:05:40.180953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.408 [2024-07-15 22:05:40.319303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.667 [2024-07-15 22:05:40.397071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.667 [2024-07-15 22:05:40.397151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.667 [2024-07-15 22:05:40.397165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.667 [2024-07-15 22:05:40.397175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.667 [2024-07-15 22:05:40.397184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.667 [2024-07-15 22:05:40.397258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.667 [2024-07-15 22:05:40.398222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.667 [2024-07-15 22:05:40.398353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.667 [2024-07-15 22:05:40.398363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.667 22:05:40 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.667 [2024-07-15 22:05:40.556209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.667 Malloc0 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:53.667 [2024-07-15 22:05:40.606876] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74781 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74783 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:53.667 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:53.667 { 00:10:53.667 "params": { 00:10:53.667 "name": "Nvme$subsystem", 00:10:53.667 "trtype": "$TEST_TRANSPORT", 00:10:53.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.667 "adrfam": "ipv4", 00:10:53.667 "trsvcid": "$NVMF_PORT", 00:10:53.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.667 "hdgst": ${hdgst:-false}, 00:10:53.667 "ddgst": 
${ddgst:-false} 00:10:53.667 }, 00:10:53.667 "method": "bdev_nvme_attach_controller" 00:10:53.667 } 00:10:53.667 EOF 00:10:53.667 )") 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74785 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74787 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:53.926 { 00:10:53.926 "params": { 00:10:53.926 "name": "Nvme$subsystem", 00:10:53.926 "trtype": "$TEST_TRANSPORT", 00:10:53.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.926 "adrfam": "ipv4", 00:10:53.926 "trsvcid": "$NVMF_PORT", 00:10:53.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.926 "hdgst": ${hdgst:-false}, 00:10:53.926 "ddgst": ${ddgst:-false} 00:10:53.926 }, 00:10:53.926 "method": "bdev_nvme_attach_controller" 00:10:53.926 } 00:10:53.926 EOF 00:10:53.926 )") 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:53.926 { 00:10:53.926 "params": { 00:10:53.926 "name": "Nvme$subsystem", 00:10:53.926 "trtype": "$TEST_TRANSPORT", 00:10:53.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.926 "adrfam": "ipv4", 00:10:53.926 "trsvcid": "$NVMF_PORT", 00:10:53.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.926 "hdgst": ${hdgst:-false}, 00:10:53.926 "ddgst": ${ddgst:-false} 00:10:53.926 }, 00:10:53.926 "method": "bdev_nvme_attach_controller" 00:10:53.926 } 00:10:53.926 EOF 00:10:53.926 )") 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:53.926 { 00:10:53.926 "params": { 00:10:53.926 "name": "Nvme$subsystem", 00:10:53.926 "trtype": "$TEST_TRANSPORT", 00:10:53.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.926 "adrfam": "ipv4", 00:10:53.926 "trsvcid": "$NVMF_PORT", 00:10:53.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.926 "hdgst": ${hdgst:-false}, 00:10:53.926 "ddgst": ${ddgst:-false} 00:10:53.926 }, 00:10:53.926 "method": "bdev_nvme_attach_controller" 00:10:53.926 } 00:10:53.926 EOF 00:10:53.926 )") 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:53.926 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:53.926 "params": { 00:10:53.926 "name": "Nvme1", 00:10:53.926 "trtype": "tcp", 00:10:53.926 "traddr": "10.0.0.2", 00:10:53.927 "adrfam": "ipv4", 00:10:53.927 "trsvcid": "4420", 00:10:53.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.927 "hdgst": false, 00:10:53.927 "ddgst": false 00:10:53.927 }, 00:10:53.927 "method": "bdev_nvme_attach_controller" 00:10:53.927 }' 00:10:53.927 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:53.927 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:53.927 "params": { 00:10:53.927 "name": "Nvme1", 00:10:53.927 "trtype": "tcp", 00:10:53.927 "traddr": "10.0.0.2", 00:10:53.927 "adrfam": "ipv4", 00:10:53.927 "trsvcid": "4420", 00:10:53.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.927 "hdgst": false, 00:10:53.927 "ddgst": false 00:10:53.927 }, 00:10:53.927 "method": "bdev_nvme_attach_controller" 00:10:53.927 }' 00:10:53.927 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:53.927 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
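The block above is where the four bdevperf configs (write, read, flush, unmap) get built: gen_nvmf_target_json expands a here-doc template once per subsystem, and each bdevperf instance reads the result through --json /dev/fd/63, i.e. a process substitution rather than a file on disk. Below is a reduced, illustrative sketch of just the templating step, reproducing the fragment visible in the trace; the real helper in nvmf/common.sh collects one such fragment per subsystem into a config array and assembles the final JSON handed to bdevperf.

# Hypothetical stand-alone rendering of the here-doc shown above; the variable
# values are the ones this run uses (tcp transport, 10.0.0.2, port 4420).
gen_fragment() {
    local subsystem=$1
    local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_fragment 1 | jq .   # yields the same "Nvme1" block that printf '%s\n' emits in the trace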
00:10:53.927 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:53.927 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:53.927 "params": { 00:10:53.927 "name": "Nvme1", 00:10:53.927 "trtype": "tcp", 00:10:53.927 "traddr": "10.0.0.2", 00:10:53.927 "adrfam": "ipv4", 00:10:53.927 "trsvcid": "4420", 00:10:53.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.927 "hdgst": false, 00:10:53.927 "ddgst": false 00:10:53.927 }, 00:10:53.927 "method": "bdev_nvme_attach_controller" 00:10:53.927 }' 00:10:53.927 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:53.927 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:53.927 "params": { 00:10:53.927 "name": "Nvme1", 00:10:53.927 "trtype": "tcp", 00:10:53.927 "traddr": "10.0.0.2", 00:10:53.927 "adrfam": "ipv4", 00:10:53.927 "trsvcid": "4420", 00:10:53.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.927 "hdgst": false, 00:10:53.927 "ddgst": false 00:10:53.927 }, 00:10:53.927 "method": "bdev_nvme_attach_controller" 00:10:53.927 }' 00:10:53.927 [2024-07-15 22:05:40.677267] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:10:53.927 [2024-07-15 22:05:40.677359] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:53.927 22:05:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74781 00:10:53.927 [2024-07-15 22:05:40.729683] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:10:53.927 [2024-07-15 22:05:40.729820] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:53.927 [2024-07-15 22:05:40.738015] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:10:53.927 [2024-07-15 22:05:40.738061] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:10:53.927 [2024-07-15 22:05:40.738148] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:53.927 [2024-07-15 22:05:40.738174] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:53.927 [2024-07-15 22:05:40.856976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.185 [2024-07-15 22:05:40.916304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.185 [2024-07-15 22:05:40.929454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:54.185 [2024-07-15 22:05:40.959857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.185 [2024-07-15 22:05:40.987802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:54.185 [2024-07-15 22:05:41.008689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.185 [2024-07-15 22:05:41.017457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:54.185 [2024-07-15 22:05:41.058553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:54.185 Running I/O for 1 seconds... 00:10:54.185 Running I/O for 1 seconds... 00:10:54.443 Running I/O for 1 seconds... 00:10:54.443 Running I/O for 1 seconds... 00:10:55.378 00:10:55.378 Latency(us) 00:10:55.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.378 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:55.378 Nvme1n1 : 1.03 4495.06 17.56 0.00 0.00 28107.00 11796.48 44326.17 00:10:55.378 =================================================================================================================== 00:10:55.378 Total : 4495.06 17.56 0.00 0.00 28107.00 11796.48 44326.17 00:10:55.378 00:10:55.378 Latency(us) 00:10:55.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.378 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:55.378 Nvme1n1 : 1.00 179753.45 702.16 0.00 0.00 709.28 288.58 1131.99 00:10:55.378 =================================================================================================================== 00:10:55.378 Total : 179753.45 702.16 0.00 0.00 709.28 288.58 1131.99 00:10:55.378 00:10:55.378 Latency(us) 00:10:55.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.378 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:55.378 Nvme1n1 : 1.01 8243.30 32.20 0.00 0.00 15431.24 4379.00 30504.03 00:10:55.378 =================================================================================================================== 00:10:55.378 Total : 8243.30 32.20 0.00 0.00 15431.24 4379.00 30504.03 00:10:55.378 00:10:55.378 Latency(us) 00:10:55.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.378 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:55.378 Nvme1n1 : 1.01 4205.63 16.43 0.00 0.00 30258.94 11141.12 57195.05 00:10:55.378 =================================================================================================================== 00:10:55.378 Total : 4205.63 16.43 0.00 0.00 30258.94 11141.12 
57195.05 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74783 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74785 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74787 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:55.637 rmmod nvme_tcp 00:10:55.637 rmmod nvme_fabrics 00:10:55.637 rmmod nvme_keyring 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 74737 ']' 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 74737 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 74737 ']' 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 74737 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74737 00:10:55.637 killing process with pid 74737 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74737' 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 74737 00:10:55.637 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 74737 00:10:55.895 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:55.895 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:55.895 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:55.895 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.895 
22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:55.895 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.895 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.895 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.895 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:55.895 00:10:55.895 real 0m3.135s 00:10:55.895 user 0m13.825s 00:10:55.895 sys 0m1.893s 00:10:55.895 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.895 22:05:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:55.895 ************************************ 00:10:55.895 END TEST nvmf_bdev_io_wait 00:10:55.895 ************************************ 00:10:55.895 22:05:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:55.895 22:05:42 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:55.895 22:05:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:55.895 22:05:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.895 22:05:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:55.895 ************************************ 00:10:55.895 START TEST nvmf_queue_depth 00:10:55.895 ************************************ 00:10:55.895 22:05:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:56.155 * Looking for test storage... 
00:10:56.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:56.155 Cannot find device "nvmf_tgt_br" 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:56.155 Cannot find device "nvmf_tgt_br2" 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:56.155 22:05:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:56.155 Cannot find device "nvmf_tgt_br" 00:10:56.155 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:10:56.155 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:56.155 Cannot find device "nvmf_tgt_br2" 00:10:56.155 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:56.155 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:56.155 22:05:43 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:56.155 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:56.155 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:56.155 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:56.155 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:56.155 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:56.155 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:56.155 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:56.155 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:56.155 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:56.155 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:10:56.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:10:56.414 00:10:56.414 --- 10.0.0.2 ping statistics --- 00:10:56.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.414 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:56.414 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:56.414 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:10:56.414 00:10:56.414 --- 10.0.0.3 ping statistics --- 00:10:56.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.414 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:56.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:10:56.414 00:10:56.414 --- 10.0.0.1 ping statistics --- 00:10:56.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.414 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=74992 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 74992 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 74992 ']' 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
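For reference, the nvmf_veth_init sequence traced above reduces to roughly the commands below. This is a simplified sketch using the same namespace, interface and address names that appear in the trace; the cleanup steps, the "true" fallbacks and the xtrace markers are omitted.

    # target runs in its own network namespace; one veth pair per link
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator side gets 10.0.0.1; the namespaced target side gets 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # admit NVMe/TCP traffic and verify connectivity in both directions
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The three successful pings recorded in the trace (host to 10.0.0.2 and 10.0.0.3, namespace back to 10.0.0.1) are the gate for letting the test proceed.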
00:10:56.414 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.415 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:56.673 [2024-07-15 22:05:43.415780] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:10:56.673 [2024-07-15 22:05:43.415904] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.673 [2024-07-15 22:05:43.553650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.673 [2024-07-15 22:05:43.612586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.673 [2024-07-15 22:05:43.612661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.673 [2024-07-15 22:05:43.612672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.673 [2024-07-15 22:05:43.612681] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.673 [2024-07-15 22:05:43.612688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.673 [2024-07-15 22:05:43.612724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:56.929 [2024-07-15 22:05:43.769828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:56.929 Malloc0 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
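The nvmfappstart and rpc_cmd calls traced above amount, in plain commands, to starting nvmf_tgt inside the test namespace and configuring it over its JSON-RPC socket. A rough equivalent using scripts/rpc.py (roughly what the rpc_cmd helper does under the hood) would be:

    spdk=/home/vagrant/spdk_repo/spdk

    # start the target in the namespace, single core (mask 0x2), all tracepoint groups enabled
    ip netns exec nvmf_tgt_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # once /var/tmp/spdk.sock is listening, configure the target
    $spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options as passed by the test
    $spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MB RAM-backed bdev, 512-byte blocks
    $spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

The namespace attach (nvmf_subsystem_add_ns) and the TCP listener on 10.0.0.2 port 4420 follow in the next trace entries.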
00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.929 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:56.929 [2024-07-15 22:05:43.821152] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.930 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.930 22:05:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75023 00:10:56.930 22:05:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:56.930 22:05:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:56.930 22:05:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75023 /var/tmp/bdevperf.sock 00:10:56.930 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75023 ']' 00:10:56.930 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:56.930 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:56.930 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:56.930 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.930 22:05:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.187 [2024-07-15 22:05:43.887843] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
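The bdevperf run that produces the result table further down is driven in three steps, all visible in the trace: start bdevperf in wait-for-RPC mode on its own socket, attach the exported namespace as an NVMe bdev, then trigger the run. A condensed sketch, with rpc.py standing in for the rpc_cmd helper:

    spdk=/home/vagrant/spdk_repo/spdk

    # 1. bdevperf idles (-z) on its own RPC socket: queue depth 1024, 4 KiB verify I/O, 10 s run
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # 2. attach the namespace exported at 10.0.0.2:4420 as bdev NVMe0n1
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # 3. kick off the workload and wait for completion
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

With these settings the run settled at roughly 7.8k IOPS (about 30 MiB/s) against the malloc-backed namespace, as the Latency table that follows shows.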
00:10:57.187 [2024-07-15 22:05:43.887980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75023 ] 00:10:57.187 [2024-07-15 22:05:44.024193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.187 [2024-07-15 22:05:44.084248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.116 22:05:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.116 22:05:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:58.116 22:05:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:58.117 22:05:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.117 22:05:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.117 NVMe0n1 00:10:58.117 22:05:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.117 22:05:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:58.373 Running I/O for 10 seconds... 00:11:08.386 00:11:08.386 Latency(us) 00:11:08.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.386 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:08.386 Verification LBA range: start 0x0 length 0x4000 00:11:08.386 NVMe0n1 : 10.09 7772.83 30.36 0.00 0.00 131071.55 29908.25 124875.87 00:11:08.386 =================================================================================================================== 00:11:08.386 Total : 7772.83 30.36 0.00 0.00 131071.55 29908.25 124875.87 00:11:08.386 0 00:11:08.644 22:05:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75023 00:11:08.644 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75023 ']' 00:11:08.644 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75023 00:11:08.644 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:08.644 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:08.644 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75023 00:11:08.644 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:08.644 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:08.644 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75023' 00:11:08.644 killing process with pid 75023 00:11:08.644 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75023 00:11:08.644 Received shutdown signal, test time was about 10.000000 seconds 00:11:08.644 00:11:08.644 Latency(us) 00:11:08.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.644 =================================================================================================================== 00:11:08.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:08.645 22:05:55 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75023 00:11:08.645 22:05:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:08.645 22:05:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:08.645 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.645 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.903 rmmod nvme_tcp 00:11:08.903 rmmod nvme_fabrics 00:11:08.903 rmmod nvme_keyring 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 74992 ']' 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 74992 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 74992 ']' 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 74992 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74992 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:08.903 killing process with pid 74992 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74992' 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 74992 00:11:08.903 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 74992 00:11:09.162 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:09.162 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:09.162 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:09.162 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:09.162 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:09.162 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.162 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.162 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.162 22:05:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:09.162 00:11:09.162 real 0m13.051s 00:11:09.162 user 0m22.957s 00:11:09.162 sys 0m2.090s 00:11:09.162 22:05:55 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.162 22:05:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.162 ************************************ 00:11:09.162 END TEST nvmf_queue_depth 00:11:09.162 ************************************ 00:11:09.162 22:05:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:09.162 22:05:55 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:09.162 22:05:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.162 22:05:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.162 22:05:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.162 ************************************ 00:11:09.162 START TEST nvmf_target_multipath 00:11:09.162 ************************************ 00:11:09.162 22:05:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:09.162 * Looking for test storage... 00:11:09.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.162 22:05:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:09.163 22:05:56 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:09.163 Cannot find device "nvmf_tgt_br" 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:09.163 Cannot find device "nvmf_tgt_br2" 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:11:09.163 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:09.422 Cannot find device "nvmf_tgt_br" 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:09.422 Cannot find device "nvmf_tgt_br2" 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:09.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:09.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:09.422 
22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:09.422 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:09.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:11:09.681 00:11:09.681 --- 10.0.0.2 ping statistics --- 00:11:09.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.681 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:09.681 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:09.681 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:11:09.681 00:11:09.681 --- 10.0.0.3 ping statistics --- 00:11:09.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.681 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:09.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:09.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:11:09.681 00:11:09.681 --- 10.0.0.1 ping statistics --- 00:11:09.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.681 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75355 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75355 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75355 ']' 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:09.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:09.681 22:05:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:09.681 [2024-07-15 22:05:56.492442] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
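The multipath target configuration that follows in the trace differs from the queue-depth test in two ways: the subsystem is created with the -r flag (ANA reporting, exercised later by the nvmf_subsystem_listener_set_ana_state calls) and it listens on both target addresses, after which the initiator connects once per path. Condensed from the trace below; the host NQN/ID are the values produced by nvme gen-hostnqn earlier in the log, and the connect flags are copied verbatim:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # connect the initiator once per path; the kernel then exposes a single
    # multipath device /dev/nvme0n1 backed by path controllers nvme0c0n1 and nvme0c1n1
    for addr in 10.0.0.2 10.0.0.3; do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
             --hostid=ff65e169-209e-4b79-b82d-da213c413a29 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a "$addr" -s 4420 -g -G
    done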
00:11:09.681 [2024-07-15 22:05:56.492570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.940 [2024-07-15 22:05:56.637496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.940 [2024-07-15 22:05:56.729297] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.940 [2024-07-15 22:05:56.729419] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.940 [2024-07-15 22:05:56.729437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.940 [2024-07-15 22:05:56.729450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.940 [2024-07-15 22:05:56.729461] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.940 [2024-07-15 22:05:56.729613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.940 [2024-07-15 22:05:56.729714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.940 [2024-07-15 22:05:56.730280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.940 [2024-07-15 22:05:56.730298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.873 22:05:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:10.873 22:05:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:11:10.873 22:05:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:10.873 22:05:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:10.873 22:05:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:10.873 22:05:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.873 22:05:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:11.131 [2024-07-15 22:05:57.858401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.131 22:05:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:11.389 Malloc0 00:11:11.647 22:05:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:11.905 22:05:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:12.164 22:05:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.730 [2024-07-15 22:05:59.379141] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.730 22:05:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 00:11:12.730 [2024-07-15 22:05:59.671864] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:12.988 22:05:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:11:12.988 22:05:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:13.246 22:06:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.246 22:06:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:11:13.246 22:06:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.246 22:06:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:13.247 22:06:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- 
# p1=nvme0c1n1 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75502 00:11:15.772 22:06:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:15.772 [global] 00:11:15.772 thread=1 00:11:15.772 invalidate=1 00:11:15.772 rw=randrw 00:11:15.772 time_based=1 00:11:15.772 runtime=6 00:11:15.772 ioengine=libaio 00:11:15.772 direct=1 00:11:15.772 bs=4096 00:11:15.773 iodepth=128 00:11:15.773 norandommap=0 00:11:15.773 numjobs=1 00:11:15.773 00:11:15.773 verify_dump=1 00:11:15.773 verify_backlog=512 00:11:15.773 verify_state_save=0 00:11:15.773 do_verify=1 00:11:15.773 verify=crc32c-intel 00:11:15.773 [job0] 00:11:15.773 filename=/dev/nvme0n1 00:11:15.773 Could not set queue depth (nvme0n1) 00:11:15.773 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:15.773 fio-3.35 00:11:15.773 Starting 1 thread 00:11:16.338 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:16.595 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:16.852 22:06:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:17.785 22:06:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:17.785 22:06:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:17.785 22:06:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:17.785 22:06:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:18.351 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:18.609 22:06:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:19.985 22:06:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:19.985 22:06:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:19.985 22:06:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:19.985 22:06:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75502 00:11:21.887 00:11:21.887 job0: (groupid=0, jobs=1): err= 0: pid=75528: Mon Jul 15 22:06:08 2024 00:11:21.887 read: IOPS=7870, BW=30.7MiB/s (32.2MB/s)(185MiB/6001msec) 00:11:21.887 slat (usec): min=2, max=10544, avg=74.23, stdev=358.62 00:11:21.887 clat (usec): min=2466, max=38102, avg=11057.97, stdev=3230.07 00:11:21.887 lat (usec): min=2509, max=38128, avg=11132.19, stdev=3255.68 00:11:21.887 clat percentiles (usec): 00:11:21.887 | 1.00th=[ 6194], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9241], 00:11:21.887 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10552], 60.00th=[10945], 00:11:21.887 | 70.00th=[11469], 80.00th=[11994], 90.00th=[13173], 95.00th=[15401], 00:11:21.887 | 99.00th=[26084], 99.50th=[27395], 99.90th=[31851], 99.95th=[32900], 00:11:21.887 | 99.99th=[38011] 00:11:21.887 bw ( KiB/s): min=11696, max=23536, per=54.08%, avg=17025.36, stdev=3422.88, samples=11 00:11:21.887 iops : min= 2924, max= 5884, avg=4256.27, stdev=855.69, samples=11 00:11:21.887 write: IOPS=4467, BW=17.5MiB/s (18.3MB/s)(96.1MiB/5508msec); 0 zone resets 00:11:21.887 slat (usec): min=3, max=3882, avg=87.72, stdev=240.08 00:11:21.887 clat (usec): min=1535, max=38033, avg=9699.55, stdev=3085.25 00:11:21.887 lat (usec): min=1591, max=38067, avg=9787.27, stdev=3105.73 00:11:21.887 clat percentiles (usec): 00:11:21.887 | 1.00th=[ 5014], 5.00th=[ 6521], 10.00th=[ 7308], 20.00th=[ 8225], 00:11:21.887 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:11:21.887 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[14353], 00:11:21.887 | 99.00th=[23200], 99.50th=[24249], 99.90th=[26084], 99.95th=[31589], 00:11:21.887 | 99.99th=[33817] 00:11:21.887 bw ( KiB/s): min=11624, max=22512, per=94.89%, avg=16956.91, stdev=3175.39, samples=11 00:11:21.887 iops : min= 2906, max= 5628, avg=4239.18, stdev=793.83, samples=11 00:11:21.887 lat (msec) : 2=0.01%, 4=0.17%, 10=47.39%, 20=48.85%, 50=3.58% 00:11:21.887 cpu : usr=4.93%, sys=20.40%, ctx=4374, majf=0, minf=72 00:11:21.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:21.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.887 issued rwts: total=47233,24607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.887 00:11:21.887 Run status group 0 (all jobs): 00:11:21.887 READ: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=185MiB (193MB), run=6001-6001msec 00:11:21.887 WRITE: bw=17.5MiB/s (18.3MB/s), 17.5MiB/s-17.5MiB/s (18.3MB/s-18.3MB/s), io=96.1MiB (101MB), run=5508-5508msec 00:11:21.887 00:11:21.887 Disk stats (read/write): 00:11:21.887 nvme0n1: ios=46049/24607, merge=0/0, ticks=481300/224029, 
in_queue=705329, util=98.58% 00:11:21.887 22:06:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:11:21.887 22:06:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:11:22.238 22:06:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:23.186 22:06:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:23.186 22:06:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:23.186 22:06:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:23.186 22:06:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:23.186 22:06:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75655 00:11:23.186 22:06:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:23.186 22:06:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:23.186 [global] 00:11:23.186 thread=1 00:11:23.186 invalidate=1 00:11:23.186 rw=randrw 00:11:23.186 time_based=1 00:11:23.186 runtime=6 00:11:23.186 ioengine=libaio 00:11:23.186 direct=1 00:11:23.186 bs=4096 00:11:23.186 iodepth=128 00:11:23.186 norandommap=0 00:11:23.186 numjobs=1 00:11:23.186 00:11:23.186 verify_dump=1 00:11:23.186 verify_backlog=512 00:11:23.186 verify_state_save=0 00:11:23.186 do_verify=1 00:11:23.186 verify=crc32c-intel 00:11:23.186 [job0] 00:11:23.186 filename=/dev/nvme0n1 00:11:23.186 Could not set queue depth (nvme0n1) 00:11:23.443 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.443 fio-3.35 00:11:23.443 Starting 1 thread 00:11:24.374 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:24.631 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:24.889 22:06:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:25.821 22:06:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:25.821 22:06:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:25.821 22:06:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:25.821 22:06:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:26.387 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:26.645 22:06:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:28.016 22:06:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:28.016 22:06:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:28.016 22:06:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:28.016 22:06:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75655 00:11:29.915 00:11:29.915 job0: (groupid=0, jobs=1): err= 0: pid=75680: Mon Jul 15 22:06:16 2024 00:11:29.915 read: IOPS=10.7k, BW=41.9MiB/s (43.9MB/s)(252MiB/6007msec) 00:11:29.915 slat (usec): min=3, max=6230, avg=47.06, stdev=225.73 00:11:29.915 clat (usec): min=170, max=51810, avg=8186.04, stdev=2725.96 00:11:29.915 lat (usec): min=206, max=51823, avg=8233.10, stdev=2741.23 00:11:29.915 clat percentiles (usec): 00:11:29.915 | 1.00th=[ 1172], 5.00th=[ 2540], 10.00th=[ 4555], 20.00th=[ 6390], 00:11:29.915 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8356], 60.00th=[ 8848], 00:11:29.915 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[11338], 95.00th=[12256], 00:11:29.915 | 99.00th=[14222], 99.50th=[15270], 99.90th=[17171], 99.95th=[17695], 00:11:29.915 | 99.99th=[22938] 00:11:29.915 bw ( KiB/s): min= 9208, max=36040, per=52.74%, avg=22630.00, stdev=7379.28, samples=12 00:11:29.915 iops : min= 2302, max= 9010, avg=5657.50, stdev=1844.82, samples=12 00:11:29.915 write: IOPS=6146, BW=24.0MiB/s (25.2MB/s)(133MiB/5519msec); 0 zone resets 00:11:29.915 slat (usec): min=12, max=3905, avg=64.49, stdev=145.70 00:11:29.915 clat (usec): min=128, max=20251, avg=6916.07, stdev=2476.99 00:11:29.915 lat (usec): min=193, max=20311, avg=6980.56, stdev=2489.02 00:11:29.915 clat percentiles (usec): 00:11:29.915 | 1.00th=[ 832], 5.00th=[ 1795], 10.00th=[ 3261], 20.00th=[ 4948], 00:11:29.915 | 30.00th=[ 6259], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7635], 00:11:29.915 | 70.00th=[ 8160], 80.00th=[ 8848], 90.00th=[ 9765], 95.00th=[10421], 00:11:29.915 | 99.00th=[12125], 99.50th=[12911], 99.90th=[15401], 99.95th=[16909], 00:11:29.915 | 99.99th=[19792] 00:11:29.915 bw ( KiB/s): min= 9416, max=35392, per=91.82%, avg=22574.67, stdev=7239.46, samples=12 00:11:29.915 iops : min= 2354, max= 8848, avg=5643.67, stdev=1809.87, samples=12 00:11:29.915 lat (usec) : 250=0.01%, 500=0.08%, 750=0.27%, 1000=0.63% 00:11:29.915 lat (msec) : 2=3.40%, 4=5.62%, 10=72.32%, 20=17.65%, 50=0.02% 00:11:29.915 lat (msec) : 100=0.01% 00:11:29.915 cpu : usr=6.64%, sys=28.26%, ctx=9442, majf=0, minf=72 00:11:29.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:29.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.915 issued rwts: total=64441,33922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.915 00:11:29.915 Run status group 0 (all jobs): 00:11:29.915 READ: bw=41.9MiB/s (43.9MB/s), 41.9MiB/s-41.9MiB/s (43.9MB/s-43.9MB/s), io=252MiB (264MB), run=6007-6007msec 00:11:29.916 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=133MiB (139MB), run=5519-5519msec 00:11:29.916 00:11:29.916 Disk stats (read/write): 00:11:29.916 nvme0n1: ios=63571/33348, merge=0/0, ticks=472516/200475, in_queue=672991, util=98.57% 00:11:29.916 22:06:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:29.916 22:06:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
00:11:29.916 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:11:29.916 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:29.916 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.916 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:29.916 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.916 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:11:29.916 22:06:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:30.175 rmmod nvme_tcp 00:11:30.175 rmmod nvme_fabrics 00:11:30.175 rmmod nvme_keyring 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75355 ']' 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75355 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75355 ']' 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75355 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75355 00:11:30.175 killing process with pid 75355 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75355' 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75355 00:11:30.175 22:06:16 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@972 -- # wait 75355 00:11:30.433 22:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.433 22:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.433 22:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:30.433 22:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.433 22:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.433 22:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.433 22:06:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.433 22:06:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.433 22:06:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:30.433 ************************************ 00:11:30.433 END TEST nvmf_target_multipath 00:11:30.433 ************************************ 00:11:30.433 00:11:30.433 real 0m21.262s 00:11:30.433 user 1m23.843s 00:11:30.433 sys 0m7.254s 00:11:30.433 22:06:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.433 22:06:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:30.433 22:06:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:30.433 22:06:17 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:30.433 22:06:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:30.433 22:06:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.433 22:06:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:30.433 ************************************ 00:11:30.433 START TEST nvmf_zcopy 00:11:30.433 ************************************ 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:30.433 * Looking for test storage... 
00:11:30.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:30.433 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:30.434 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:30.434 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:30.434 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:30.434 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.434 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:30.434 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:30.434 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:30.434 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:30.434 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:30.434 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:30.434 Cannot find device "nvmf_tgt_br" 00:11:30.434 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:11:30.434 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:30.691 Cannot find device "nvmf_tgt_br2" 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:30.691 Cannot find device "nvmf_tgt_br" 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:30.691 Cannot find device "nvmf_tgt_br2" 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:30.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:30.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:30.691 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:30.692 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:30.692 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:30.692 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:30.692 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:30.692 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:30.692 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:30.692 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:30.692 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:30.692 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:30.692 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:30.692 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:30.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:11:30.949 00:11:30.949 --- 10.0.0.2 ping statistics --- 00:11:30.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.949 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:30.949 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:30.949 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:11:30.949 00:11:30.949 --- 10.0.0.3 ping statistics --- 00:11:30.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.949 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:30.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:30.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:11:30.949 00:11:30.949 --- 10.0.0.1 ping statistics --- 00:11:30.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.949 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=75983 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 75983 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 75983 ']' 00:11:30.949 22:06:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.950 22:06:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:30.950 22:06:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.950 22:06:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:30.950 22:06:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:30.950 [2024-07-15 22:06:17.823427] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:11:30.950 [2024-07-15 22:06:17.823574] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.207 [2024-07-15 22:06:17.963595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.207 [2024-07-15 22:06:18.023069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.207 [2024-07-15 22:06:18.023165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:31.207 [2024-07-15 22:06:18.023182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.207 [2024-07-15 22:06:18.023193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.207 [2024-07-15 22:06:18.023204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.207 [2024-07-15 22:06:18.023248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:32.139 [2024-07-15 22:06:18.931439] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:32.139 [2024-07-15 22:06:18.947567] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:32.139 malloc0 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.139 
22:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:32.139 { 00:11:32.139 "params": { 00:11:32.139 "name": "Nvme$subsystem", 00:11:32.139 "trtype": "$TEST_TRANSPORT", 00:11:32.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:32.139 "adrfam": "ipv4", 00:11:32.139 "trsvcid": "$NVMF_PORT", 00:11:32.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:32.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:32.139 "hdgst": ${hdgst:-false}, 00:11:32.139 "ddgst": ${ddgst:-false} 00:11:32.139 }, 00:11:32.139 "method": "bdev_nvme_attach_controller" 00:11:32.139 } 00:11:32.139 EOF 00:11:32.139 )") 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:32.139 22:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:32.139 "params": { 00:11:32.139 "name": "Nvme1", 00:11:32.139 "trtype": "tcp", 00:11:32.139 "traddr": "10.0.0.2", 00:11:32.139 "adrfam": "ipv4", 00:11:32.139 "trsvcid": "4420", 00:11:32.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:32.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:32.139 "hdgst": false, 00:11:32.139 "ddgst": false 00:11:32.139 }, 00:11:32.139 "method": "bdev_nvme_attach_controller" 00:11:32.139 }' 00:11:32.139 [2024-07-15 22:06:19.043269] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:11:32.139 [2024-07-15 22:06:19.043369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76040 ] 00:11:32.397 [2024-07-15 22:06:19.205542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.397 [2024-07-15 22:06:19.268518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.655 Running I/O for 10 seconds... 
00:11:42.617 00:11:42.617 Latency(us) 00:11:42.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.617 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:42.617 Verification LBA range: start 0x0 length 0x1000 00:11:42.617 Nvme1n1 : 10.01 5477.68 42.79 0.00 0.00 23291.47 793.13 34555.35 00:11:42.617 =================================================================================================================== 00:11:42.617 Total : 5477.68 42.79 0.00 0.00 23291.47 793.13 34555.35 00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76153 00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:42.874 { 00:11:42.874 "params": { 00:11:42.874 "name": "Nvme$subsystem", 00:11:42.874 "trtype": "$TEST_TRANSPORT", 00:11:42.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:42.874 "adrfam": "ipv4", 00:11:42.874 "trsvcid": "$NVMF_PORT", 00:11:42.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:42.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:42.874 "hdgst": ${hdgst:-false}, 00:11:42.874 "ddgst": ${ddgst:-false} 00:11:42.874 }, 00:11:42.874 "method": "bdev_nvme_attach_controller" 00:11:42.874 } 00:11:42.874 EOF 00:11:42.874 )") 00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:42.874 [2024-07-15 22:06:29.612888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.874 [2024-07-15 22:06:29.612937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:42.874 22:06:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:42.874 "params": { 00:11:42.874 "name": "Nvme1", 00:11:42.874 "trtype": "tcp", 00:11:42.874 "traddr": "10.0.0.2", 00:11:42.874 "adrfam": "ipv4", 00:11:42.874 "trsvcid": "4420", 00:11:42.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:42.874 "hdgst": false, 00:11:42.874 "ddgst": false 00:11:42.874 }, 00:11:42.874 "method": "bdev_nvme_attach_controller" 00:11:42.874 }' 00:11:42.874 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.874 [2024-07-15 22:06:29.624886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.874 [2024-07-15 22:06:29.624930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.874 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.874 [2024-07-15 22:06:29.636891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.874 [2024-07-15 22:06:29.636950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.874 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.874 [2024-07-15 22:06:29.648892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.874 [2024-07-15 22:06:29.648939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.660884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.660929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.672904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.672957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 [2024-07-15 22:06:29.674806] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:11:42.875 [2024-07-15 22:06:29.674937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76153 ] 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.684891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.684944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.696889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.696933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.708900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.708951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.720910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.720959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.732909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.732954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.744911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.744955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.756920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.756969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.768937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.768992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.780956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.781012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.792952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.793012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.800966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.801015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:42.875 [2024-07-15 22:06:29.808965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.809014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:42.875 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:42.875 [2024-07-15 22:06:29.820968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:42.875 [2024-07-15 22:06:29.821029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.133 [2024-07-15 22:06:29.823964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.133 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.133 [2024-07-15 22:06:29.832980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.133 [2024-07-15 22:06:29.833045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.133 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.133 [2024-07-15 22:06:29.844960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.133 [2024-07-15 22:06:29.845013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.133 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.133 [2024-07-15 22:06:29.856963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.133 [2024-07-15 22:06:29.857027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.133 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:29.868976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:29.869033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:29.880972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:29.881025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:29.892992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:29.893055] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:29.905000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:29.905063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:29.911872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.134 [2024-07-15 22:06:29.916968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:29.917019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:29.929009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:29.929067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:29.940995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:29.941048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:29.952987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:29.953034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:29.965021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:29.965077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:29.976990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:29.977041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:29.984956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:29.985004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:29.997053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:29.997120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:30.009020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:30.009097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:30.021019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:30.021079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:30.033019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:30.033070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:30.045023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:30.045072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 [2024-07-15 22:06:30.057020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:30.057070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.134 Running I/O for 5 seconds... 00:11:43.134 [2024-07-15 22:06:30.074219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.134 [2024-07-15 22:06:30.074281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.134 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.392 [2024-07-15 22:06:30.091428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.392 [2024-07-15 22:06:30.091491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.392 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.392 [2024-07-15 22:06:30.108884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.392 [2024-07-15 22:06:30.108955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.392 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.392 [2024-07-15 22:06:30.127126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.392 [2024-07-15 22:06:30.127201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.392 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.392 [2024-07-15 22:06:30.141434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.392 [2024-07-15 22:06:30.141503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.392 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.392 [2024-07-15 22:06:30.152182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.392 [2024-07-15 22:06:30.152247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.392 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.393 [2024-07-15 22:06:30.166787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.393 [2024-07-15 22:06:30.166854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.393 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.393 [2024-07-15 22:06:30.182960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.393 [2024-07-15 22:06:30.183032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.393 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.393 [2024-07-15 22:06:30.200278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.393 [2024-07-15 22:06:30.200347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.393 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.393 [2024-07-15 22:06:30.216646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.393 [2024-07-15 22:06:30.216720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.393 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.393 [2024-07-15 22:06:30.232967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.393 [2024-07-15 22:06:30.233039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.393 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.393 [2024-07-15 22:06:30.248902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:43.393 [2024-07-15 22:06:30.248969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.393 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.393 [2024-07-15 22:06:30.265627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.393 [2024-07-15 22:06:30.265697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.393 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.393 [2024-07-15 22:06:30.281895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.393 [2024-07-15 22:06:30.281963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.393 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.393 [2024-07-15 22:06:30.298289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.393 [2024-07-15 22:06:30.298375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.393 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.393 [2024-07-15 22:06:30.308276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.393 [2024-07-15 22:06:30.308340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.393 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.393 [2024-07-15 22:06:30.322797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.393 [2024-07-15 22:06:30.322864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.393 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.393 [2024-07-15 22:06:30.338926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.393 [2024-07-15 22:06:30.338994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.393 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.652 [2024-07-15 22:06:30.349488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.652 [2024-07-15 22:06:30.349553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.652 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.652 [2024-07-15 22:06:30.364572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.652 [2024-07-15 22:06:30.364640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.652 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.652 [2024-07-15 22:06:30.380278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.652 [2024-07-15 22:06:30.380360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.652 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.652 [2024-07-15 22:06:30.397184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.652 [2024-07-15 22:06:30.397252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.652 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.652 [2024-07-15 22:06:30.413544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.652 [2024-07-15 22:06:30.413615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.652 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.652 [2024-07-15 22:06:30.429676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.652 [2024-07-15 22:06:30.429742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.652 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.652 [2024-07-15 22:06:30.446671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
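The failures repeated throughout this stretch of the log all come from the same JSON-RPC request: nvmf_subsystem_add_ns is issued over and over for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that namespace already exists, so each attempt is rejected with Code=-32602 (Invalid parameters). As a rough illustration only, a request of this shape could be sent with SPDK's scripts/rpc.py; the -s socket flag, the default socket path /var/tmp/spdk.sock, and the exact argument order are assumptions that may differ between SPDK versions, and the NQN, bdev name, and NSID below are simply the values visible in the log entries above, not commands taken from this run.

  # illustrative sketch, not part of this test run: re-adding an existing NSID
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0

The equivalent raw JSON-RPC payload, reconstructed from the params map printed in the log (the jsonrpc/id envelope fields are the standard JSON-RPC 2.0 wrapper, not copied from this log), would look roughly like:

  {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns", "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "namespace": {"bdev_name": "malloc0", "nsid": 1}}}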
00:11:43.652 [2024-07-15 22:06:30.446737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.652 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.652 [2024-07-15 22:06:30.463514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.652 [2024-07-15 22:06:30.463590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.652 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.652 [2024-07-15 22:06:30.480505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.652 [2024-07-15 22:06:30.480578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.652 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.652 [2024-07-15 22:06:30.496668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.652 [2024-07-15 22:06:30.496746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.653 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.653 [2024-07-15 22:06:30.507356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.653 [2024-07-15 22:06:30.507429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.653 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.653 [2024-07-15 22:06:30.523006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.653 [2024-07-15 22:06:30.523096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.653 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.653 [2024-07-15 22:06:30.539254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.653 [2024-07-15 22:06:30.539331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.653 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.653 [2024-07-15 22:06:30.550225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.653 [2024-07-15 22:06:30.550301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.653 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.653 [2024-07-15 22:06:30.566515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.653 [2024-07-15 22:06:30.566593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.653 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.653 [2024-07-15 22:06:30.581684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.653 [2024-07-15 22:06:30.581763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.653 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.653 [2024-07-15 22:06:30.591578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.653 [2024-07-15 22:06:30.591651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.653 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.911 [2024-07-15 22:06:30.606767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.911 [2024-07-15 22:06:30.606839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.911 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.911 [2024-07-15 22:06:30.624861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.911 [2024-07-15 22:06:30.624939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.911 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.911 [2024-07-15 22:06:30.639185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.911 [2024-07-15 22:06:30.639248] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.911 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.911 [2024-07-15 22:06:30.649192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.911 [2024-07-15 22:06:30.649268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.911 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.911 [2024-07-15 22:06:30.664257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.911 [2024-07-15 22:06:30.664326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.911 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.911 [2024-07-15 22:06:30.680925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.911 [2024-07-15 22:06:30.680992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.911 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.911 [2024-07-15 22:06:30.697424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.912 [2024-07-15 22:06:30.697494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.912 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.912 [2024-07-15 22:06:30.709110] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.912 [2024-07-15 22:06:30.709174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.912 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.912 [2024-07-15 22:06:30.725425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.912 [2024-07-15 22:06:30.725500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.912 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.912 [2024-07-15 22:06:30.741453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.912 [2024-07-15 22:06:30.741535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.912 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.912 [2024-07-15 22:06:30.759014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.912 [2024-07-15 22:06:30.759113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.912 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.912 [2024-07-15 22:06:30.774794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.912 [2024-07-15 22:06:30.774863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.912 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.912 [2024-07-15 22:06:30.785451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.912 [2024-07-15 22:06:30.785520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.912 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.912 [2024-07-15 22:06:30.800514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.912 [2024-07-15 22:06:30.800588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.912 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.912 [2024-07-15 22:06:30.817766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.912 [2024-07-15 22:06:30.817842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.912 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.912 [2024-07-15 22:06:30.834259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.912 [2024-07-15 22:06:30.834315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:43.912 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:43.912 [2024-07-15 22:06:30.850580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.912 [2024-07-15 22:06:30.850650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.912 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.170 [2024-07-15 22:06:30.866621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.170 [2024-07-15 22:06:30.866694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.170 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.170 [2024-07-15 22:06:30.883827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.170 [2024-07-15 22:06:30.883890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.170 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.170 [2024-07-15 22:06:30.899557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.170 [2024-07-15 22:06:30.899627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.170 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.170 [2024-07-15 22:06:30.916028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.170 [2024-07-15 22:06:30.916120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.170 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.170 [2024-07-15 22:06:30.933395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:30.933468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:44.171 [2024-07-15 22:06:30.948454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:30.948524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.171 [2024-07-15 22:06:30.964448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:30.964507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.171 [2024-07-15 22:06:30.981448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:30.981507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.171 [2024-07-15 22:06:30.992604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:30.992658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.171 [2024-07-15 22:06:31.003950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:31.004014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.171 [2024-07-15 22:06:31.015014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:31.015069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.171 [2024-07-15 22:06:31.031149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:31.031216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:31 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.171 [2024-07-15 22:06:31.048464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:31.048536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.171 [2024-07-15 22:06:31.060725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:31.060805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.171 [2024-07-15 22:06:31.077551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:31.077635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.171 [2024-07-15 22:06:31.088940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:31.089018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.171 [2024-07-15 22:06:31.100638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:31.100706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.171 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.171 [2024-07-15 22:06:31.115938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.171 [2024-07-15 22:06:31.116013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.429 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.429 [2024-07-15 22:06:31.126994] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.429 [2024-07-15 22:06:31.127057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.429 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.429 [2024-07-15 22:06:31.143240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.429 [2024-07-15 22:06:31.143316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.429 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.429 [2024-07-15 22:06:31.159402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.429 [2024-07-15 22:06:31.159510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.429 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.429 [2024-07-15 22:06:31.175569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.175659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.186263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.186347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.201812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.201893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.217602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.217698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.234565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.234638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.251774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.251857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.268340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.268423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.284261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.284348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.294355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.294438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.309416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.309506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.321012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.321105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.337400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.337479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.352599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.352679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.430 [2024-07-15 22:06:31.369660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.430 [2024-07-15 22:06:31.369740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.430 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.710 [2024-07-15 22:06:31.386093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.710 [2024-07-15 22:06:31.386180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.710 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.710 [2024-07-15 22:06:31.403668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.710 [2024-07-15 22:06:31.403757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.710 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.710 [2024-07-15 22:06:31.420338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.710 [2024-07-15 22:06:31.420413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.710 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.710 [2024-07-15 22:06:31.436781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.710 [2024-07-15 22:06:31.436852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.710 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.710 [2024-07-15 22:06:31.455825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.710 [2024-07-15 22:06:31.455910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.711 [2024-07-15 22:06:31.467547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.711 [2024-07-15 22:06:31.467617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.711 [2024-07-15 22:06:31.478873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.711 [2024-07-15 22:06:31.478952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.711 [2024-07-15 22:06:31.494309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.711 [2024-07-15 22:06:31.494383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.711 [2024-07-15 22:06:31.510932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.711 [2024-07-15 22:06:31.511007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.711 [2024-07-15 22:06:31.529512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:44.711 [2024-07-15 22:06:31.529591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.711 [2024-07-15 22:06:31.540856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.711 [2024-07-15 22:06:31.540928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.711 [2024-07-15 22:06:31.556948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.711 [2024-07-15 22:06:31.557032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.711 [2024-07-15 22:06:31.574123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.711 [2024-07-15 22:06:31.574196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.711 [2024-07-15 22:06:31.585585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.711 [2024-07-15 22:06:31.585656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.711 [2024-07-15 22:06:31.597038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.711 [2024-07-15 22:06:31.597126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.711 [2024-07-15 22:06:31.612590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.711 [2024-07-15 22:06:31.612660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.711 [2024-07-15 22:06:31.629664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.711 [2024-07-15 22:06:31.629754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.711 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.645420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.645489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.655211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.655275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.670154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.670232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.681500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.681575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.696890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.696962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.708076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.708156] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.723505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.723583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.740038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.740132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.757319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.757396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.773949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.774024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.790560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.790629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.807580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.807658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.824196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.824274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.835161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.835230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.850804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.850885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.866672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.866755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.877521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.877592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.893858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.893937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:44.970 [2024-07-15 22:06:31.909512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.970 [2024-07-15 22:06:31.909598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:44.970 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.229 [2024-07-15 22:06:31.920767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.229 [2024-07-15 22:06:31.920841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.229 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.229 [2024-07-15 22:06:31.936871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.229 [2024-07-15 22:06:31.936944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.229 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.229 [2024-07-15 22:06:31.955889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.229 [2024-07-15 22:06:31.955981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.229 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.229 [2024-07-15 22:06:31.972612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.229 [2024-07-15 22:06:31.972709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.267 [2024-07-15 22:06:31.985242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:31.985334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.267 [2024-07-15 22:06:32.002604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:32.002668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:45.267 [2024-07-15 22:06:32.019582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:32.019641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.267 [2024-07-15 22:06:32.036356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:32.036423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.267 [2024-07-15 22:06:32.052434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:32.052512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.267 [2024-07-15 22:06:32.066556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:32.066644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.267 [2024-07-15 22:06:32.082395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:32.082490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.267 [2024-07-15 22:06:32.097695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:32.097788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.267 [2024-07-15 22:06:32.113776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:32.113852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:32 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.267 [2024-07-15 22:06:32.124642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:32.124709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.267 [2024-07-15 22:06:32.140308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:32.140378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.267 [2024-07-15 22:06:32.151276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:32.151346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.267 [2024-07-15 22:06:32.162818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.267 [2024-07-15 22:06:32.162887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.267 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.526 [2024-07-15 22:06:32.178712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.526 [2024-07-15 22:06:32.178775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.526 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.526 [2024-07-15 22:06:32.195413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.526 [2024-07-15 22:06:32.195478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.212019] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.212114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.228897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.228968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.239091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.239148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.255337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.255405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.271719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.271791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.288947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.289024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.305164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.305237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.322582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.322665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.339028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.339109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.355729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.355794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.372449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.372526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.389186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.389266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.407393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.407464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.422512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.422586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.439469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.439537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.455974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.456041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.527 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.527 [2024-07-15 22:06:32.474384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.527 [2024-07-15 22:06:32.474452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.490485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.490555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.506649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.506744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.523593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.523675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.539997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.540076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.556672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.556749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.575235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.575304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.590857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.590949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.607144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.607224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.623541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.623629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.643028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:45.786 [2024-07-15 22:06:32.643135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.658277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.658349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.669241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.669314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.684286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.684363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.700952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.701025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.711314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.711380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:45.786 [2024-07-15 22:06:32.726155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.786 [2024-07-15 22:06:32.726223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.786 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.044 [2024-07-15 22:06:32.742856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.044 [2024-07-15 22:06:32.742913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.762383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.762452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.778063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.778139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.795607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.795676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.812026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.812110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.829133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.829200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.845774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.845835] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.862383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.862443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.878599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.878659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.895910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.895969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.912244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.912308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.929168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.929237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.940205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.940278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.955655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.955727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.966508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.966564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.045 [2024-07-15 22:06:32.982170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.045 [2024-07-15 22:06:32.982230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.045 2024/07/15 22:06:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.303 [2024-07-15 22:06:32.998669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.303 [2024-07-15 22:06:32.998732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.303 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.303 [2024-07-15 22:06:33.014893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.303 [2024-07-15 22:06:33.014971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.303 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.303 [2024-07-15 22:06:33.026562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.303 [2024-07-15 22:06:33.026611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.303 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.303 [2024-07-15 22:06:33.041743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.041812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.304 [2024-07-15 22:06:33.058250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.058310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.304 [2024-07-15 22:06:33.074179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.074241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.304 [2024-07-15 22:06:33.090769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.090836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.304 [2024-07-15 22:06:33.109630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.109693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.304 [2024-07-15 22:06:33.124916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.124976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.304 [2024-07-15 22:06:33.141878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.141951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:46.304 [2024-07-15 22:06:33.158395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.158445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.304 [2024-07-15 22:06:33.175004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.175069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.304 [2024-07-15 22:06:33.191917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.191985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.304 [2024-07-15 22:06:33.208617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.208687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.304 [2024-07-15 22:06:33.225875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.225937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.304 [2024-07-15 22:06:33.243781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.304 [2024-07-15 22:06:33.243840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.304 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.259169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.259244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.270217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.270283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.286209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.286278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.302620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.302692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.319307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.319364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.335549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.335599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.351919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.351977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.368073] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.368144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.384483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.384541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.401881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.401939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.418118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.418192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.434137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.434219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.444972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.445033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.459916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.459974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.470073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.470138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.484789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.484845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.562 [2024-07-15 22:06:33.501802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.562 [2024-07-15 22:06:33.501861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.562 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.820 [2024-07-15 22:06:33.517308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.820 [2024-07-15 22:06:33.517364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.820 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.820 [2024-07-15 22:06:33.528060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.820 [2024-07-15 22:06:33.528122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.820 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.820 [2024-07-15 22:06:33.542534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.820 [2024-07-15 22:06:33.542589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.553215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.553269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.567901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.567964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.578576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.578628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.593522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.593579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.609563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.609623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.619860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.619914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.635985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.636039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.650974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.651028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.665843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.665913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.684419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.684481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.699898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.699956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.716174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.716232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.727271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.727334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.742127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:46.821 [2024-07-15 22:06:33.742186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.758359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.758415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.821 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:46.821 [2024-07-15 22:06:33.768335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.821 [2024-07-15 22:06:33.768394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.080 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.080 [2024-07-15 22:06:33.783301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.080 [2024-07-15 22:06:33.783363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.080 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.080 [2024-07-15 22:06:33.799527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.080 [2024-07-15 22:06:33.799594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.080 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.080 [2024-07-15 22:06:33.815296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.080 [2024-07-15 22:06:33.815361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.080 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.080 [2024-07-15 22:06:33.833241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.080 [2024-07-15 22:06:33.833301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.080 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.080 [2024-07-15 22:06:33.849246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.080 [2024-07-15 22:06:33.849305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.080 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.080 [2024-07-15 22:06:33.865227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.080 [2024-07-15 22:06:33.865287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.080 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.080 [2024-07-15 22:06:33.882832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.080 [2024-07-15 22:06:33.882887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.080 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.080 [2024-07-15 22:06:33.898414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.080 [2024-07-15 22:06:33.898467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.080 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.080 [2024-07-15 22:06:33.908388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.080 [2024-07-15 22:06:33.908438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.080 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.080 [2024-07-15 22:06:33.924768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.080 [2024-07-15 22:06:33.924824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.080 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.080 [2024-07-15 22:06:33.940944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.080 [2024-07-15 22:06:33.941001] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.080 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.080 [2024-07-15 22:06:33.957938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.080 [2024-07-15 22:06:33.957996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.081 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.081 [2024-07-15 22:06:33.974054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.081 [2024-07-15 22:06:33.974125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.081 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.081 [2024-07-15 22:06:33.991211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.081 [2024-07-15 22:06:33.991264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.081 2024/07/15 22:06:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.081 [2024-07-15 22:06:34.007215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.081 [2024-07-15 22:06:34.007270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.081 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.081 [2024-07-15 22:06:34.024585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.081 [2024-07-15 22:06:34.024642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.081 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.339 [2024-07-15 22:06:34.040194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.339 [2024-07-15 22:06:34.040254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.339 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.050380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.050425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.065356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.065409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.081521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.081574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.091281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.091331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.106336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.106390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.116565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.116622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.131061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.131131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.141559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.141611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.156532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.156588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.172881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.172938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.188255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.188303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.203933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.203980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.214446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.214491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:47.340 [2024-07-15 22:06:34.228915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.228964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.239565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.239615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.254315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.254372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.265280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.265338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.340 [2024-07-15 22:06:34.280077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.340 [2024-07-15 22:06:34.280145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.340 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.599 [2024-07-15 22:06:34.290172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.599 [2024-07-15 22:06:34.290215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.599 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.599 [2024-07-15 22:06:34.304839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.599 [2024-07-15 22:06:34.304892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.599 2024/07/15 22:06:34 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.315608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.315648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.330636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.330688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.347361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.347411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.363287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.363342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.379617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.379669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.397892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.397939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.413007] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.413053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.425116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.425162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.442871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.442925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.458304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.458379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.475355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.475409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.490961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.491016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.501773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.501824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.516522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.516577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.527411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.527458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.600 [2024-07-15 22:06:34.541827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.600 [2024-07-15 22:06:34.541882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.600 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.552119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.859 [2024-07-15 22:06:34.552166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.859 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.566940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.859 [2024-07-15 22:06:34.566994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.859 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.585975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.859 [2024-07-15 22:06:34.586023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.859 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.602062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:47.859 [2024-07-15 22:06:34.602154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.859 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.619768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.859 [2024-07-15 22:06:34.619836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.859 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.633119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.859 [2024-07-15 22:06:34.633188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.859 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.651967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.859 [2024-07-15 22:06:34.652036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.859 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.669461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.859 [2024-07-15 22:06:34.669538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.859 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.685031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.859 [2024-07-15 22:06:34.685123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.859 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.701565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.859 [2024-07-15 22:06:34.701629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.859 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.717589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.859 [2024-07-15 22:06:34.717647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.859 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.734837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.859 [2024-07-15 22:06:34.734896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.859 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.859 [2024-07-15 22:06:34.752931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.860 [2024-07-15 22:06:34.752996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.860 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.860 [2024-07-15 22:06:34.771317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.860 [2024-07-15 22:06:34.771386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.860 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.860 [2024-07-15 22:06:34.789232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.860 [2024-07-15 22:06:34.789289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:47.860 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:47.860 [2024-07-15 22:06:34.806478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:47.860 [2024-07-15 22:06:34.806540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:34.823753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:48.121 [2024-07-15 22:06:34.823816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:34.835542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:34.835593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:34.851977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:34.852039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:34.868800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:34.868863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:34.885599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:34.885664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:34.903198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:34.903251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:34.921670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:34.921725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:34.937754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:34.937807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:34.954365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:34.954414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:34.971253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:34.971300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:34.987558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:34.987609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:35.005530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:35.005584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:35.020840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:35.020892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.121 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.121 [2024-07-15 22:06:35.031453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.121 [2024-07-15 22:06:35.031498] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:48.121 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:11:48.121 [2024-07-15 22:06:35.046222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:48.121 [2024-07-15 22:06:35.046270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:48.121 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:11:48.121 [2024-07-15 22:06:35.063421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:48.121 [2024-07-15 22:06:35.063484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:48.121 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:11:48.379 [2024-07-15 22:06:35.075164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:48.379 [2024-07-15 22:06:35.075210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:48.379 
00:11:48.379                                                             Latency(us)
00:11:48.379  Device Information                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:48.379  Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:48.379 	 Nvme1n1                                                 :       5.01   10900.26      85.16       0.00       0.00   11727.73    5004.57   21567.30
00:11:48.379 ===================================================================================================================
00:11:48.379  Total                                                    :             10900.26      85.16       0.00       0.00   11727.73    5004.57   21567.30
00:11:48.379 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:11:48.379 [2024-07-15 22:06:35.087141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:48.379 [2024-07-15 22:06:35.087185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:48.379 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:11:48.379 [2024-07-15 22:06:35.099167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:48.379 [2024-07-15 22:06:35.099218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:48.379 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err:
Code=-32602 Msg=Invalid parameters 00:11:48.379 [2024-07-15 22:06:35.111174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.379 [2024-07-15 22:06:35.111226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.379 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.379 [2024-07-15 22:06:35.123171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.379 [2024-07-15 22:06:35.123218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.379 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.379 [2024-07-15 22:06:35.135188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.379 [2024-07-15 22:06:35.135237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.379 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.379 [2024-07-15 22:06:35.147171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.379 [2024-07-15 22:06:35.147218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.379 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.379 [2024-07-15 22:06:35.159157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.379 [2024-07-15 22:06:35.159193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.379 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.380 [2024-07-15 22:06:35.171192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.380 [2024-07-15 22:06:35.171239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.380 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.380 [2024-07-15 22:06:35.183198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.380 [2024-07-15 22:06:35.183243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.380 2024/07/15 
22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.380 [2024-07-15 22:06:35.195202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.380 [2024-07-15 22:06:35.195251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.380 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.380 [2024-07-15 22:06:35.207182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.380 [2024-07-15 22:06:35.207226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.380 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.380 [2024-07-15 22:06:35.219185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.380 [2024-07-15 22:06:35.219223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.380 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.380 [2024-07-15 22:06:35.231171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.380 [2024-07-15 22:06:35.231204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.380 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.380 [2024-07-15 22:06:35.243180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.380 [2024-07-15 22:06:35.243213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.380 2024/07/15 22:06:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:48.380 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76153) - No such process 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76153 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.380 delay0 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.380 22:06:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:48.638 [2024-07-15 22:06:35.438864] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:55.207 Initializing NVMe Controllers 00:11:55.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:55.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:55.207 Initialization complete. Launching workers. 00:11:55.207 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 71 00:11:55.207 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 358, failed to submit 33 00:11:55.207 success 167, unsuccess 191, failed 0 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:55.207 rmmod nvme_tcp 00:11:55.207 rmmod nvme_fabrics 00:11:55.207 rmmod nvme_keyring 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 75983 ']' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 75983 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 75983 ']' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 75983 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 75983 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:55.207 killing process with pid 75983 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75983' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 75983 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 75983 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:55.207 00:11:55.207 real 0m24.571s 00:11:55.207 user 0m39.622s 00:11:55.207 sys 0m6.612s 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.207 22:06:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:55.207 ************************************ 00:11:55.207 END TEST nvmf_zcopy 00:11:55.207 ************************************ 00:11:55.207 22:06:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:55.207 22:06:41 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:55.207 22:06:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:55.207 22:06:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.207 22:06:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:55.207 ************************************ 00:11:55.207 START TEST nvmf_nmic 00:11:55.207 ************************************ 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:55.207 * Looking for test storage... 
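Before moving on to the nmic test: the long run of Code=-32602 errors in the zcopy trace above is expected. While the abort example drives I/O, the zcopy test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which the target rejects with "Requested NSID 1 already in use". Once that add_ns loop (pid 76153) has been reaped, the namespace is removed, re-backed by a delay bdev, and the abort example is pointed at it. A condensed sketch of that tail end of the test, assuming the default /var/tmp/spdk.sock RPC socket and the repo paths used by this job (not the literal zcopy.sh code):

    # swap the namespace backing to a delay bdev, then run the abort example against it
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort summary above (358 aborts submitted, 167 successful, 191 unsuccessful) is the example's own report; after it the test unloads nvme-tcp/nvme-fabrics/nvme-keyring and kills the target before the nmic test starts.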
00:11:55.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
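nvmf_veth_init, whose individual ip/iptables calls are traced below, wires the initiator and the target into one bridge: the initiator keeps 10.0.0.1 on nvmf_init_if, while the target side lives in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3. The "Cannot find device" and "Cannot open network namespace" messages just below come from the pre-clean step and are harmless. A condensed sketch of the topology it builds (names and addresses are the ones common.sh uses in this run; link-up commands omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # host / initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target, first listener IP
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target, second listener IP
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge is forwarding before the target application is started.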
00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:55.207 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:55.208 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:55.208 Cannot find device "nvmf_tgt_br" 00:11:55.208 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:11:55.208 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:55.208 Cannot find device "nvmf_tgt_br2" 00:11:55.208 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:11:55.208 22:06:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:55.208 Cannot find device "nvmf_tgt_br" 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:55.208 Cannot find device "nvmf_tgt_br2" 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:55.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:55.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:55.208 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:55.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:11:55.466 00:11:55.466 --- 10.0.0.2 ping statistics --- 00:11:55.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.466 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:55.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:55.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:11:55.466 00:11:55.466 --- 10.0.0.3 ping statistics --- 00:11:55.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.466 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:55.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:55.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:55.466 00:11:55.466 --- 10.0.0.1 ping statistics --- 00:11:55.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.466 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:55.466 22:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:55.467 22:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:55.467 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76484 00:11:55.467 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.467 22:06:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76484 00:11:55.467 22:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76484 ']' 00:11:55.467 22:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.467 22:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:55.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.467 22:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.467 22:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:55.467 22:06:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:55.467 [2024-07-15 22:06:42.400187] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:11:55.467 [2024-07-15 22:06:42.400280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.725 [2024-07-15 22:06:42.538193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.725 [2024-07-15 22:06:42.601587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.725 [2024-07-15 22:06:42.601641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
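The target is launched inside the namespace so that it owns 10.0.0.2/10.0.0.3 while the initiator-side tools keep running in the default namespace. A minimal sketch of what nvmfappstart plus waitforlisten amount to here (the polling loop is illustrative, not the literal autotest_common.sh code):

    # start nvmf_tgt in the target namespace and wait for its JSON-RPC socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

-m 0xF gives the target four reactors, which is why four "Reactor started on core N" notices appear just below once initialization finishes.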
00:11:55.725 [2024-07-15 22:06:42.601653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.725 [2024-07-15 22:06:42.601661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.725 [2024-07-15 22:06:42.601668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.725 [2024-07-15 22:06:42.601949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.725 [2024-07-15 22:06:42.602013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.725 [2024-07-15 22:06:42.602186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.725 [2024-07-15 22:06:42.602192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.660 [2024-07-15 22:06:43.467778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.660 Malloc0 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.660 [2024-07-15 22:06:43.527005] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.660 test case1: single bdev can't be used in multiple subsystems 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.660 [2024-07-15 22:06:43.550886] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:56.660 [2024-07-15 22:06:43.550929] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:56.660 [2024-07-15 22:06:43.550941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.660 2024/07/15 22:06:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:56.660 request: 00:11:56.660 { 00:11:56.660 "method": "nvmf_subsystem_add_ns", 00:11:56.660 "params": { 00:11:56.660 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:56.660 "namespace": { 00:11:56.660 "bdev_name": "Malloc0", 00:11:56.660 "no_auto_visible": false 00:11:56.660 } 00:11:56.660 } 00:11:56.660 } 00:11:56.660 Got JSON-RPC error response 00:11:56.660 GoRPCClient: error on JSON-RPC call 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:56.660 Adding namespace failed - expected result. 
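test case1 boils down to offering the same malloc bdev to two subsystems: cnode1 takes an exclusive_write claim on Malloc0 when the namespace is added, so the second nvmf_subsystem_add_ns against cnode2 must fail, and the Code=-32602 / "bdev Malloc0 cannot be opened, error=-1" response above is the expected result. Condensed from the rpc_cmd calls traced above (sketch; rpc_cmd forwards to scripts/rpc.py against the target's default socket):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: bdev already claimed

test case2, which follows, adds a second listener on port 4421 to cnode1, connects the host over both 4420 and 4421, and runs a short fio write job through the resulting nvme block device.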
00:11:56.660 test case2: host connect to nvmf target in multiple paths 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:56.660 [2024-07-15 22:06:43.563039] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.660 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.918 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:57.176 22:06:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.176 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:57.176 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.176 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:57.176 22:06:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:59.073 22:06:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:59.073 22:06:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:59.073 22:06:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.073 22:06:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:59.073 22:06:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.073 22:06:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:59.073 22:06:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:59.073 [global] 00:11:59.073 thread=1 00:11:59.073 invalidate=1 00:11:59.073 rw=write 00:11:59.073 time_based=1 00:11:59.073 runtime=1 00:11:59.073 ioengine=libaio 00:11:59.073 direct=1 00:11:59.073 bs=4096 00:11:59.073 iodepth=1 00:11:59.073 norandommap=0 00:11:59.073 numjobs=1 00:11:59.073 00:11:59.073 verify_dump=1 00:11:59.073 verify_backlog=512 00:11:59.073 verify_state_save=0 00:11:59.073 do_verify=1 00:11:59.073 verify=crc32c-intel 00:11:59.073 [job0] 00:11:59.073 filename=/dev/nvme0n1 00:11:59.073 Could not set queue depth (nvme0n1) 00:11:59.395 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:59.395 fio-3.35 00:11:59.395 Starting 1 thread 00:12:00.326 00:12:00.326 job0: (groupid=0, jobs=1): err= 0: pid=76588: Mon Jul 15 22:06:47 2024 00:12:00.326 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:12:00.326 slat (nsec): min=15005, max=56869, avg=21189.69, stdev=6878.52 00:12:00.326 clat (usec): 
min=130, max=385, avg=150.57, stdev=14.46 00:12:00.326 lat (usec): min=147, max=402, avg=171.76, stdev=17.20 00:12:00.326 clat percentiles (usec): 00:12:00.326 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:12:00.326 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:12:00.326 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 172], 00:12:00.326 | 99.00th=[ 198], 99.50th=[ 217], 99.90th=[ 318], 99.95th=[ 371], 00:12:00.326 | 99.99th=[ 388] 00:12:00.326 write: IOPS=3272, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1001msec); 0 zone resets 00:12:00.326 slat (usec): min=22, max=113, avg=30.70, stdev= 9.67 00:12:00.327 clat (usec): min=79, max=672, avg=108.81, stdev=16.07 00:12:00.327 lat (usec): min=116, max=696, avg=139.51, stdev=20.60 00:12:00.327 clat percentiles (usec): 00:12:00.327 | 1.00th=[ 95], 5.00th=[ 97], 10.00th=[ 98], 20.00th=[ 101], 00:12:00.327 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 110], 00:12:00.327 | 70.00th=[ 112], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 128], 00:12:00.327 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 322], 99.95th=[ 355], 00:12:00.327 | 99.99th=[ 676] 00:12:00.327 bw ( KiB/s): min=12824, max=12824, per=97.96%, avg=12824.00, stdev= 0.00, samples=1 00:12:00.327 iops : min= 3206, max= 3206, avg=3206.00, stdev= 0.00, samples=1 00:12:00.327 lat (usec) : 100=9.18%, 250=90.61%, 500=0.19%, 750=0.02% 00:12:00.327 cpu : usr=3.30%, sys=12.40%, ctx=6348, majf=0, minf=2 00:12:00.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:00.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:00.327 issued rwts: total=3072,3276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:00.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:00.327 00:12:00.327 Run status group 0 (all jobs): 00:12:00.327 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:12:00.327 WRITE: bw=12.8MiB/s (13.4MB/s), 12.8MiB/s-12.8MiB/s (13.4MB/s-13.4MB/s), io=12.8MiB (13.4MB), run=1001-1001msec 00:12:00.327 00:12:00.327 Disk stats (read/write): 00:12:00.327 nvme0n1: ios=2698/3072, merge=0/0, ticks=436/374, in_queue=810, util=91.28% 00:12:00.327 22:06:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # 
sync 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.585 rmmod nvme_tcp 00:12:00.585 rmmod nvme_fabrics 00:12:00.585 rmmod nvme_keyring 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76484 ']' 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76484 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76484 ']' 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76484 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76484 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:00.585 killing process with pid 76484 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76484' 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76484 00:12:00.585 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76484 00:12:00.843 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:00.843 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:00.843 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:00.843 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:00.843 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:00.843 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.843 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.843 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.843 22:06:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:00.843 00:12:00.843 real 0m5.839s 00:12:00.843 user 0m19.719s 00:12:00.843 sys 0m1.390s 00:12:00.843 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:00.843 22:06:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:00.843 ************************************ 00:12:00.843 END TEST nvmf_nmic 00:12:00.843 ************************************ 00:12:00.843 22:06:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:00.843 22:06:47 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:00.843 22:06:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:00.843 22:06:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:12:00.843 22:06:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:00.843 ************************************ 00:12:00.843 START TEST nvmf_fio_target 00:12:00.843 ************************************ 00:12:00.843 22:06:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:01.100 * Looking for test storage... 00:12:01.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.101 
22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:01.101 Cannot find device "nvmf_tgt_br" 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:01.101 Cannot find device "nvmf_tgt_br2" 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:01.101 Cannot find device "nvmf_tgt_br" 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:01.101 Cannot find device "nvmf_tgt_br2" 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:01.101 22:06:47 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:01.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:01.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:01.101 22:06:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:01.101 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:01.101 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:01.101 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:01.101 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:01.358 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:01.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:01.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:12:01.358 00:12:01.358 --- 10.0.0.2 ping statistics --- 00:12:01.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.359 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:01.359 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:01.359 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:12:01.359 00:12:01.359 --- 10.0.0.3 ping statistics --- 00:12:01.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.359 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:01.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:12:01.359 00:12:01.359 --- 10.0.0.1 ping statistics --- 00:12:01.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.359 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=76767 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 76767 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 76767 ']' 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
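The nvmf_veth_init trace above builds the whole test network in software: a network namespace (nvmf_tgt_ns_spdk) for the SPDK target, veth pairs whose host-side peers are bridged through nvmf_br, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, an iptables rule admitting NVMe/TCP traffic on port 4420, and ping checks in both directions. The following is a condensed sketch reconstructed from that trace, not the script itself; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity.

# namespace for the target, one veth pair per side (names taken from the trace above)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# addressing: 10.0.0.1 for the initiator, 10.0.0.2 for the target inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
# bring the links up, then bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# admit NVMe/TCP (port 4420), allow bridge forwarding, then verify reachability both ways
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1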
00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.359 22:06:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.359 [2024-07-15 22:06:48.260304] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:12:01.359 [2024-07-15 22:06:48.260409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.616 [2024-07-15 22:06:48.394618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.616 [2024-07-15 22:06:48.465568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.616 [2024-07-15 22:06:48.465644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.616 [2024-07-15 22:06:48.465657] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.616 [2024-07-15 22:06:48.465666] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.616 [2024-07-15 22:06:48.465674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.616 [2024-07-15 22:06:48.465835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.616 [2024-07-15 22:06:48.465882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.616 [2024-07-15 22:06:48.466001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.616 [2024-07-15 22:06:48.466520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.873 22:06:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:01.873 22:06:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:12:01.873 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:01.873 22:06:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:01.873 22:06:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.873 22:06:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.873 22:06:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:02.130 [2024-07-15 22:06:48.925305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.130 22:06:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:02.696 22:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:02.696 22:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:02.696 22:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:02.696 22:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.264 22:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:03.264 22:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:12:03.521 22:06:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:03.521 22:06:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:03.779 22:06:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:04.038 22:06:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:04.038 22:06:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:04.602 22:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:04.602 22:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:04.860 22:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:04.860 22:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:05.127 22:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:05.396 22:06:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:05.396 22:06:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:05.969 22:06:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:05.969 22:06:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.227 22:06:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.486 [2024-07-15 22:06:53.303811] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.486 22:06:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:06.744 22:06:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:07.002 22:06:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.260 22:06:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:07.260 22:06:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:07.260 22:06:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.260 22:06:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:07.260 22:06:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:07.260 22:06:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 
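Everything the target exposes in this test is provisioned over rpc.py: a TCP transport, seven malloc bdevs (size 64, block size 512, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE), with Malloc0 and Malloc1 kept standalone, Malloc2+Malloc3 assembled into raid0 and Malloc4-Malloc6 into concat0, then one subsystem carrying four namespaces and a TCP listener on 10.0.0.2:4420; the initiator connects with nvme-cli and waits until four namespaces with the SPDKISFASTANDAWESOME serial appear. A condensed sketch of that sequence, assuming rpc.py talks to the target's default /var/tmp/spdk.sock (the trace issues each RPC individually rather than in loops):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
# seven malloc bdevs (Malloc0..Malloc6), created one call at a time in the trace
for i in $(seq 7); do $rpc bdev_malloc_create 64 512; done
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: connect, then wait for the four namespaces to show up
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
    --hostid=ff65e169-209e-4b79-b82d-da213c413a29
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial expects 4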
00:12:09.212 22:06:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:09.212 22:06:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:09.213 22:06:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.213 22:06:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:09.213 22:06:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.213 22:06:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:09.213 22:06:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:09.213 [global] 00:12:09.213 thread=1 00:12:09.213 invalidate=1 00:12:09.213 rw=write 00:12:09.213 time_based=1 00:12:09.213 runtime=1 00:12:09.213 ioengine=libaio 00:12:09.213 direct=1 00:12:09.213 bs=4096 00:12:09.213 iodepth=1 00:12:09.213 norandommap=0 00:12:09.213 numjobs=1 00:12:09.213 00:12:09.213 verify_dump=1 00:12:09.213 verify_backlog=512 00:12:09.213 verify_state_save=0 00:12:09.213 do_verify=1 00:12:09.213 verify=crc32c-intel 00:12:09.213 [job0] 00:12:09.213 filename=/dev/nvme0n1 00:12:09.213 [job1] 00:12:09.213 filename=/dev/nvme0n2 00:12:09.213 [job2] 00:12:09.213 filename=/dev/nvme0n3 00:12:09.213 [job3] 00:12:09.213 filename=/dev/nvme0n4 00:12:09.213 Could not set queue depth (nvme0n1) 00:12:09.213 Could not set queue depth (nvme0n2) 00:12:09.213 Could not set queue depth (nvme0n3) 00:12:09.213 Could not set queue depth (nvme0n4) 00:12:09.470 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.470 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.470 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.470 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.470 fio-3.35 00:12:09.470 Starting 4 threads 00:12:10.853 00:12:10.853 job0: (groupid=0, jobs=1): err= 0: pid=77057: Mon Jul 15 22:06:57 2024 00:12:10.853 read: IOPS=2061, BW=8248KiB/s (8446kB/s)(8256KiB/1001msec) 00:12:10.853 slat (nsec): min=15796, max=54953, avg=23781.44, stdev=5704.44 00:12:10.853 clat (usec): min=142, max=2469, avg=217.99, stdev=61.34 00:12:10.853 lat (usec): min=162, max=2499, avg=241.77, stdev=61.93 00:12:10.853 clat percentiles (usec): 00:12:10.853 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 169], 00:12:10.853 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:12:10.853 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 262], 00:12:10.853 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 408], 99.95th=[ 412], 00:12:10.853 | 99.99th=[ 2474] 00:12:10.853 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:10.853 slat (usec): min=21, max=170, avg=33.38, stdev= 9.83 00:12:10.853 clat (usec): min=100, max=732, avg=157.68, stdev=33.97 00:12:10.853 lat (usec): min=124, max=780, avg=191.06, stdev=38.57 00:12:10.853 clat percentiles (usec): 00:12:10.853 | 1.00th=[ 106], 5.00th=[ 112], 10.00th=[ 117], 20.00th=[ 124], 00:12:10.853 | 30.00th=[ 133], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 169], 00:12:10.853 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 200], 00:12:10.853 | 99.00th=[ 245], 
99.50th=[ 297], 99.90th=[ 355], 99.95th=[ 367], 00:12:10.853 | 99.99th=[ 734] 00:12:10.853 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:12:10.853 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:10.853 lat (usec) : 250=93.60%, 500=6.36%, 750=0.02% 00:12:10.853 lat (msec) : 4=0.02% 00:12:10.853 cpu : usr=2.40%, sys=10.20%, ctx=4625, majf=0, minf=13 00:12:10.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.853 issued rwts: total=2064,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.853 job1: (groupid=0, jobs=1): err= 0: pid=77058: Mon Jul 15 22:06:57 2024 00:12:10.853 read: IOPS=1094, BW=4380KiB/s (4485kB/s)(4384KiB/1001msec) 00:12:10.853 slat (nsec): min=18153, max=76693, avg=26498.65, stdev=6417.22 00:12:10.853 clat (usec): min=249, max=1051, avg=411.26, stdev=98.39 00:12:10.853 lat (usec): min=276, max=1112, avg=437.76, stdev=100.34 00:12:10.853 clat percentiles (usec): 00:12:10.853 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 285], 00:12:10.853 | 30.00th=[ 310], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 457], 00:12:10.853 | 70.00th=[ 469], 80.00th=[ 478], 90.00th=[ 494], 95.00th=[ 506], 00:12:10.853 | 99.00th=[ 676], 99.50th=[ 734], 99.90th=[ 816], 99.95th=[ 1057], 00:12:10.853 | 99.99th=[ 1057] 00:12:10.853 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:10.853 slat (usec): min=21, max=280, avg=39.57, stdev=11.30 00:12:10.853 clat (usec): min=15, max=1208, avg=293.88, stdev=81.24 00:12:10.853 lat (usec): min=158, max=1253, avg=333.45, stdev=83.04 00:12:10.853 clat percentiles (usec): 00:12:10.853 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:12:10.853 | 30.00th=[ 229], 40.00th=[ 297], 50.00th=[ 318], 60.00th=[ 330], 00:12:10.853 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 379], 00:12:10.853 | 99.00th=[ 416], 99.50th=[ 490], 99.90th=[ 1156], 99.95th=[ 1205], 00:12:10.853 | 99.99th=[ 1205] 00:12:10.853 bw ( KiB/s): min= 5112, max= 5112, per=15.62%, avg=5112.00, stdev= 0.00, samples=1 00:12:10.853 iops : min= 1278, max= 1278, avg=1278.00, stdev= 0.00, samples=1 00:12:10.853 lat (usec) : 20=0.04%, 250=18.96%, 500=77.96%, 750=2.70%, 1000=0.19% 00:12:10.853 lat (msec) : 2=0.15% 00:12:10.853 cpu : usr=1.70%, sys=6.70%, ctx=2633, majf=0, minf=6 00:12:10.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.853 issued rwts: total=1096,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.853 job2: (groupid=0, jobs=1): err= 0: pid=77059: Mon Jul 15 22:06:57 2024 00:12:10.853 read: IOPS=2248, BW=8995KiB/s (9211kB/s)(9004KiB/1001msec) 00:12:10.853 slat (nsec): min=14479, max=97622, avg=22207.72, stdev=4503.02 00:12:10.853 clat (usec): min=118, max=825, avg=203.91, stdev=39.20 00:12:10.853 lat (usec): min=169, max=851, avg=226.12, stdev=40.80 00:12:10.853 clat percentiles (usec): 00:12:10.853 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:12:10.853 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 192], 
60.00th=[ 200], 00:12:10.853 | 70.00th=[ 217], 80.00th=[ 241], 90.00th=[ 262], 95.00th=[ 273], 00:12:10.853 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 441], 99.95th=[ 537], 00:12:10.853 | 99.99th=[ 824] 00:12:10.853 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:10.853 slat (usec): min=21, max=4074, avg=33.47, stdev=80.14 00:12:10.853 clat (usec): min=110, max=2673, avg=154.04, stdev=59.44 00:12:10.853 lat (usec): min=132, max=4267, avg=187.51, stdev=100.52 00:12:10.853 clat percentiles (usec): 00:12:10.854 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 130], 00:12:10.854 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 145], 60.00th=[ 153], 00:12:10.854 | 70.00th=[ 163], 80.00th=[ 178], 90.00th=[ 192], 95.00th=[ 202], 00:12:10.854 | 99.00th=[ 241], 99.50th=[ 265], 99.90th=[ 449], 99.95th=[ 898], 00:12:10.854 | 99.99th=[ 2671] 00:12:10.854 bw ( KiB/s): min=12288, max=12288, per=37.54%, avg=12288.00, stdev= 0.00, samples=1 00:12:10.854 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:10.854 lat (usec) : 250=92.60%, 500=7.32%, 750=0.02%, 1000=0.04% 00:12:10.854 lat (msec) : 4=0.02% 00:12:10.854 cpu : usr=2.40%, sys=9.90%, ctx=4811, majf=0, minf=5 00:12:10.854 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.854 issued rwts: total=2251,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.854 job3: (groupid=0, jobs=1): err= 0: pid=77060: Mon Jul 15 22:06:57 2024 00:12:10.854 read: IOPS=1142, BW=4571KiB/s (4681kB/s)(4576KiB/1001msec) 00:12:10.854 slat (nsec): min=18482, max=55735, avg=26778.83, stdev=5681.99 00:12:10.854 clat (usec): min=181, max=1037, avg=396.50, stdev=96.35 00:12:10.854 lat (usec): min=201, max=1060, avg=423.28, stdev=96.68 00:12:10.854 clat percentiles (usec): 00:12:10.854 | 1.00th=[ 249], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 281], 00:12:10.854 | 30.00th=[ 297], 40.00th=[ 408], 50.00th=[ 441], 60.00th=[ 453], 00:12:10.854 | 70.00th=[ 465], 80.00th=[ 474], 90.00th=[ 486], 95.00th=[ 498], 00:12:10.854 | 99.00th=[ 553], 99.50th=[ 709], 99.90th=[ 955], 99.95th=[ 1037], 00:12:10.854 | 99.99th=[ 1037] 00:12:10.854 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:10.854 slat (usec): min=27, max=122, avg=39.97, stdev= 9.87 00:12:10.854 clat (usec): min=113, max=2528, avg=290.53, stdev=97.07 00:12:10.854 lat (usec): min=149, max=2571, avg=330.50, stdev=98.57 00:12:10.854 clat percentiles (usec): 00:12:10.854 | 1.00th=[ 133], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 198], 00:12:10.854 | 30.00th=[ 212], 40.00th=[ 297], 50.00th=[ 318], 60.00th=[ 330], 00:12:10.854 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 367], 95.00th=[ 375], 00:12:10.854 | 99.00th=[ 404], 99.50th=[ 494], 99.90th=[ 1205], 99.95th=[ 2540], 00:12:10.854 | 99.99th=[ 2540] 00:12:10.854 bw ( KiB/s): min= 5208, max= 5208, per=15.91%, avg=5208.00, stdev= 0.00, samples=1 00:12:10.854 iops : min= 1302, max= 1302, avg=1302.00, stdev= 0.00, samples=1 00:12:10.854 lat (usec) : 250=20.37%, 500=77.65%, 750=1.72%, 1000=0.15% 00:12:10.854 lat (msec) : 2=0.07%, 4=0.04% 00:12:10.854 cpu : usr=2.00%, sys=6.70%, ctx=2680, majf=0, minf=11 00:12:10.854 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.854 issued rwts: total=1144,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.854 00:12:10.854 Run status group 0 (all jobs): 00:12:10.854 READ: bw=25.6MiB/s (26.8MB/s), 4380KiB/s-8995KiB/s (4485kB/s-9211kB/s), io=25.6MiB (26.8MB), run=1001-1001msec 00:12:10.854 WRITE: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:12:10.854 00:12:10.854 Disk stats (read/write): 00:12:10.854 nvme0n1: ios=1773/2048, merge=0/0, ticks=455/369, in_queue=824, util=88.25% 00:12:10.854 nvme0n2: ios=1055/1028, merge=0/0, ticks=475/367, in_queue=842, util=88.50% 00:12:10.854 nvme0n3: ios=2054/2176, merge=0/0, ticks=422/351, in_queue=773, util=89.00% 00:12:10.854 nvme0n4: ios=1024/1074, merge=0/0, ticks=420/365, in_queue=785, util=89.63% 00:12:10.854 22:06:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:10.854 [global] 00:12:10.854 thread=1 00:12:10.854 invalidate=1 00:12:10.854 rw=randwrite 00:12:10.854 time_based=1 00:12:10.854 runtime=1 00:12:10.854 ioengine=libaio 00:12:10.854 direct=1 00:12:10.854 bs=4096 00:12:10.854 iodepth=1 00:12:10.854 norandommap=0 00:12:10.854 numjobs=1 00:12:10.854 00:12:10.854 verify_dump=1 00:12:10.854 verify_backlog=512 00:12:10.854 verify_state_save=0 00:12:10.854 do_verify=1 00:12:10.854 verify=crc32c-intel 00:12:10.854 [job0] 00:12:10.854 filename=/dev/nvme0n1 00:12:10.854 [job1] 00:12:10.854 filename=/dev/nvme0n2 00:12:10.854 [job2] 00:12:10.854 filename=/dev/nvme0n3 00:12:10.854 [job3] 00:12:10.854 filename=/dev/nvme0n4 00:12:10.854 Could not set queue depth (nvme0n1) 00:12:10.854 Could not set queue depth (nvme0n2) 00:12:10.854 Could not set queue depth (nvme0n3) 00:12:10.854 Could not set queue depth (nvme0n4) 00:12:10.854 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.854 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.854 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.854 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.854 fio-3.35 00:12:10.854 Starting 4 threads 00:12:12.265 00:12:12.265 job0: (groupid=0, jobs=1): err= 0: pid=77119: Mon Jul 15 22:06:58 2024 00:12:12.265 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:12.265 slat (nsec): min=16579, max=71285, avg=27522.09, stdev=6026.47 00:12:12.265 clat (usec): min=180, max=1574, avg=334.32, stdev=78.15 00:12:12.265 lat (usec): min=209, max=1605, avg=361.84, stdev=79.12 00:12:12.265 clat percentiles (usec): 00:12:12.265 | 1.00th=[ 249], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:12:12.265 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 314], 60.00th=[ 343], 00:12:12.265 | 70.00th=[ 363], 80.00th=[ 383], 90.00th=[ 416], 95.00th=[ 461], 00:12:12.265 | 99.00th=[ 545], 99.50th=[ 627], 99.90th=[ 1254], 99.95th=[ 1582], 00:12:12.265 | 99.99th=[ 1582] 00:12:12.265 write: IOPS=1620, BW=6482KiB/s (6637kB/s)(6488KiB/1001msec); 0 zone resets 00:12:12.265 slat (usec): min=23, max=111, avg=39.37, stdev= 7.61 00:12:12.265 clat (usec): min=127, max=1536, avg=228.28, stdev=55.88 
00:12:12.265 lat (usec): min=163, max=1590, avg=267.65, stdev=57.09 00:12:12.265 clat percentiles (usec): 00:12:12.265 | 1.00th=[ 163], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:12:12.265 | 30.00th=[ 198], 40.00th=[ 210], 50.00th=[ 221], 60.00th=[ 235], 00:12:12.265 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 302], 00:12:12.265 | 99.00th=[ 363], 99.50th=[ 441], 99.90th=[ 725], 99.95th=[ 1532], 00:12:12.265 | 99.99th=[ 1532] 00:12:12.265 bw ( KiB/s): min= 8192, max= 8192, per=25.41%, avg=8192.00, stdev= 0.00, samples=1 00:12:12.265 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:12.265 lat (usec) : 250=39.14%, 500=59.34%, 750=1.39%, 1000=0.03% 00:12:12.265 lat (msec) : 2=0.09% 00:12:12.265 cpu : usr=2.20%, sys=7.90%, ctx=3169, majf=0, minf=11 00:12:12.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.265 issued rwts: total=1536,1622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.265 job1: (groupid=0, jobs=1): err= 0: pid=77120: Mon Jul 15 22:06:58 2024 00:12:12.265 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:12.265 slat (nsec): min=15214, max=87761, avg=27026.90, stdev=8698.15 00:12:12.265 clat (usec): min=161, max=1467, avg=318.67, stdev=71.27 00:12:12.265 lat (usec): min=178, max=1497, avg=345.69, stdev=72.94 00:12:12.265 clat percentiles (usec): 00:12:12.265 | 1.00th=[ 186], 5.00th=[ 245], 10.00th=[ 258], 20.00th=[ 273], 00:12:12.265 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 322], 00:12:12.265 | 70.00th=[ 343], 80.00th=[ 363], 90.00th=[ 392], 95.00th=[ 416], 00:12:12.265 | 99.00th=[ 461], 99.50th=[ 562], 99.90th=[ 1237], 99.95th=[ 1467], 00:12:12.265 | 99.99th=[ 1467] 00:12:12.265 write: IOPS=1757, BW=7029KiB/s (7198kB/s)(7036KiB/1001msec); 0 zone resets 00:12:12.265 slat (usec): min=19, max=255, avg=35.30, stdev=19.65 00:12:12.265 clat (usec): min=3, max=1198, avg=225.96, stdev=57.08 00:12:12.265 lat (usec): min=124, max=1234, avg=261.27, stdev=58.05 00:12:12.265 clat percentiles (usec): 00:12:12.265 | 1.00th=[ 111], 5.00th=[ 155], 10.00th=[ 182], 20.00th=[ 196], 00:12:12.265 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 231], 00:12:12.265 | 70.00th=[ 243], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 293], 00:12:12.265 | 99.00th=[ 355], 99.50th=[ 523], 99.90th=[ 1004], 99.95th=[ 1205], 00:12:12.265 | 99.99th=[ 1205] 00:12:12.265 bw ( KiB/s): min= 8192, max= 8192, per=25.41%, avg=8192.00, stdev= 0.00, samples=1 00:12:12.265 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:12.265 lat (usec) : 4=0.06%, 10=0.03%, 50=0.06%, 100=0.06%, 250=43.31% 00:12:12.265 lat (usec) : 500=55.87%, 750=0.39%, 1000=0.09% 00:12:12.265 lat (msec) : 2=0.12% 00:12:12.265 cpu : usr=1.70%, sys=8.00%, ctx=3322, majf=0, minf=15 00:12:12.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.265 issued rwts: total=1536,1759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.265 job2: (groupid=0, jobs=1): err= 0: pid=77121: Mon Jul 15 22:06:58 2024 00:12:12.265 read: 
IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:12.265 slat (nsec): min=18871, max=57944, avg=24086.60, stdev=3937.82 00:12:12.265 clat (usec): min=150, max=2746, avg=180.20, stdev=54.77 00:12:12.265 lat (usec): min=172, max=2771, avg=204.28, stdev=54.96 00:12:12.265 clat percentiles (usec): 00:12:12.265 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:12:12.265 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:12:12.265 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 229], 00:12:12.265 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 289], 99.95th=[ 388], 00:12:12.265 | 99.99th=[ 2737] 00:12:12.265 write: IOPS=2637, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec); 0 zone resets 00:12:12.265 slat (usec): min=28, max=401, avg=35.19, stdev= 8.73 00:12:12.265 clat (usec): min=110, max=353, avg=140.31, stdev=17.43 00:12:12.265 lat (usec): min=139, max=553, avg=175.51, stdev=19.56 00:12:12.265 clat percentiles (usec): 00:12:12.265 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:12:12.265 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:12:12.265 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 176], 00:12:12.265 | 99.00th=[ 198], 99.50th=[ 206], 99.90th=[ 277], 99.95th=[ 326], 00:12:12.265 | 99.99th=[ 355] 00:12:12.265 bw ( KiB/s): min=12288, max=12288, per=38.11%, avg=12288.00, stdev= 0.00, samples=1 00:12:12.265 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:12.265 lat (usec) : 250=99.23%, 500=0.75% 00:12:12.265 lat (msec) : 4=0.02% 00:12:12.265 cpu : usr=2.90%, sys=11.50%, ctx=5200, majf=0, minf=15 00:12:12.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.265 issued rwts: total=2560,2640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.265 job3: (groupid=0, jobs=1): err= 0: pid=77122: Mon Jul 15 22:06:58 2024 00:12:12.265 read: IOPS=1865, BW=7461KiB/s (7640kB/s)(7468KiB/1001msec) 00:12:12.265 slat (usec): min=28, max=107, avg=30.64, stdev= 3.59 00:12:12.265 clat (usec): min=174, max=730, avg=245.93, stdev=23.01 00:12:12.265 lat (usec): min=246, max=762, avg=276.57, stdev=23.07 00:12:12.265 clat percentiles (usec): 00:12:12.265 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 233], 00:12:12.265 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:12:12.265 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:12:12.265 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 717], 99.95th=[ 734], 00:12:12.265 | 99.99th=[ 734] 00:12:12.265 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:12.265 slat (usec): min=38, max=111, avg=41.61, stdev= 4.22 00:12:12.265 clat (usec): min=158, max=914, avg=188.08, stdev=20.83 00:12:12.265 lat (usec): min=199, max=954, avg=229.69, stdev=21.24 00:12:12.265 clat percentiles (usec): 00:12:12.265 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:12:12.265 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:12:12.265 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 204], 95.00th=[ 210], 00:12:12.265 | 99.00th=[ 229], 99.50th=[ 241], 99.90th=[ 293], 99.95th=[ 297], 00:12:12.265 | 99.99th=[ 914] 00:12:12.265 bw ( KiB/s): min= 8192, max= 8192, per=25.41%, avg=8192.00, stdev= 0.00, 
samples=1 00:12:12.265 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:12.265 lat (usec) : 250=85.06%, 500=14.84%, 750=0.08%, 1000=0.03% 00:12:12.265 cpu : usr=2.30%, sys=11.40%, ctx=3915, majf=0, minf=6 00:12:12.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.265 issued rwts: total=1867,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.265 00:12:12.265 Run status group 0 (all jobs): 00:12:12.265 READ: bw=29.3MiB/s (30.7MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=29.3MiB (30.7MB), run=1001-1001msec 00:12:12.265 WRITE: bw=31.5MiB/s (33.0MB/s), 6482KiB/s-10.3MiB/s (6637kB/s-10.8MB/s), io=31.5MiB (33.1MB), run=1001-1001msec 00:12:12.265 00:12:12.265 Disk stats (read/write): 00:12:12.265 nvme0n1: ios=1313/1536, merge=0/0, ticks=444/376, in_queue=820, util=87.45% 00:12:12.265 nvme0n2: ios=1376/1536, merge=0/0, ticks=449/353, in_queue=802, util=88.44% 00:12:12.265 nvme0n3: ios=2054/2418, merge=0/0, ticks=391/361, in_queue=752, util=89.10% 00:12:12.265 nvme0n4: ios=1536/1820, merge=0/0, ticks=388/370, in_queue=758, util=89.74% 00:12:12.265 22:06:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:12.265 [global] 00:12:12.265 thread=1 00:12:12.265 invalidate=1 00:12:12.265 rw=write 00:12:12.265 time_based=1 00:12:12.265 runtime=1 00:12:12.265 ioengine=libaio 00:12:12.265 direct=1 00:12:12.265 bs=4096 00:12:12.265 iodepth=128 00:12:12.265 norandommap=0 00:12:12.265 numjobs=1 00:12:12.265 00:12:12.265 verify_dump=1 00:12:12.265 verify_backlog=512 00:12:12.265 verify_state_save=0 00:12:12.265 do_verify=1 00:12:12.265 verify=crc32c-intel 00:12:12.265 [job0] 00:12:12.265 filename=/dev/nvme0n1 00:12:12.265 [job1] 00:12:12.265 filename=/dev/nvme0n2 00:12:12.265 [job2] 00:12:12.265 filename=/dev/nvme0n3 00:12:12.266 [job3] 00:12:12.266 filename=/dev/nvme0n4 00:12:12.266 Could not set queue depth (nvme0n1) 00:12:12.266 Could not set queue depth (nvme0n2) 00:12:12.266 Could not set queue depth (nvme0n3) 00:12:12.266 Could not set queue depth (nvme0n4) 00:12:12.266 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.266 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.266 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.266 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:12.266 fio-3.35 00:12:12.266 Starting 4 threads 00:12:13.640 00:12:13.640 job0: (groupid=0, jobs=1): err= 0: pid=77176: Mon Jul 15 22:07:00 2024 00:12:13.640 read: IOPS=1637, BW=6550KiB/s (6708kB/s)(6616KiB/1010msec) 00:12:13.640 slat (usec): min=3, max=14495, avg=188.30, stdev=1024.96 00:12:13.640 clat (usec): min=5362, max=43483, avg=22140.99, stdev=4561.26 00:12:13.640 lat (usec): min=9752, max=43493, avg=22329.29, stdev=4653.83 00:12:13.640 clat percentiles (usec): 00:12:13.640 | 1.00th=[10028], 5.00th=[17957], 10.00th=[19006], 20.00th=[20055], 00:12:13.640 | 30.00th=[20579], 40.00th=[20841], 50.00th=[21365], 60.00th=[21627], 00:12:13.640 | 70.00th=[22152], 80.00th=[23200], 
90.00th=[26870], 95.00th=[30016], 00:12:13.640 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:12:13.640 | 99.99th=[43254] 00:12:13.640 write: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec); 0 zone resets 00:12:13.640 slat (usec): min=12, max=25689, avg=330.74, stdev=1523.15 00:12:13.640 clat (usec): min=19610, max=88056, avg=43693.76, stdev=15144.53 00:12:13.640 lat (usec): min=19635, max=88104, avg=44024.50, stdev=15247.55 00:12:13.640 clat percentiles (usec): 00:12:13.640 | 1.00th=[21890], 5.00th=[26870], 10.00th=[28443], 20.00th=[30016], 00:12:13.640 | 30.00th=[31851], 40.00th=[33424], 50.00th=[39584], 60.00th=[43254], 00:12:13.640 | 70.00th=[52691], 80.00th=[59507], 90.00th=[66847], 95.00th=[71828], 00:12:13.640 | 99.00th=[76022], 99.50th=[79168], 99.90th=[80217], 99.95th=[84411], 00:12:13.640 | 99.99th=[87557] 00:12:13.640 bw ( KiB/s): min= 8112, max= 8208, per=14.25%, avg=8160.00, stdev=67.88, samples=2 00:12:13.640 iops : min= 2028, max= 2052, avg=2040.00, stdev=16.97, samples=2 00:12:13.640 lat (msec) : 10=0.38%, 20=7.64%, 50=73.26%, 100=18.72% 00:12:13.640 cpu : usr=1.78%, sys=5.45%, ctx=258, majf=0, minf=9 00:12:13.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:12:13.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:13.640 issued rwts: total=1654,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:13.640 job1: (groupid=0, jobs=1): err= 0: pid=77177: Mon Jul 15 22:07:00 2024 00:12:13.640 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:12:13.640 slat (usec): min=6, max=4292, avg=85.68, stdev=389.46 00:12:13.640 clat (usec): min=6816, max=17115, avg=11492.22, stdev=1263.36 00:12:13.640 lat (usec): min=8214, max=17126, avg=11577.90, stdev=1223.65 00:12:13.640 clat percentiles (usec): 00:12:13.640 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10683], 00:12:13.640 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:12:13.640 | 70.00th=[11600], 80.00th=[12649], 90.00th=[13435], 95.00th=[13829], 00:12:13.640 | 99.00th=[14877], 99.50th=[15270], 99.90th=[16909], 99.95th=[17171], 00:12:13.640 | 99.99th=[17171] 00:12:13.640 write: IOPS=5687, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1003msec); 0 zone resets 00:12:13.640 slat (usec): min=8, max=3148, avg=82.68, stdev=324.71 00:12:13.640 clat (usec): min=2254, max=15466, avg=10856.26, stdev=1491.68 00:12:13.640 lat (usec): min=2280, max=15498, avg=10938.94, stdev=1495.71 00:12:13.640 clat percentiles (usec): 00:12:13.640 | 1.00th=[ 6325], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:12:13.640 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11076], 00:12:13.640 | 70.00th=[11338], 80.00th=[11863], 90.00th=[12911], 95.00th=[13435], 00:12:13.640 | 99.00th=[14746], 99.50th=[15270], 99.90th=[15401], 99.95th=[15401], 00:12:13.640 | 99.99th=[15401] 00:12:13.640 bw ( KiB/s): min=21832, max=23224, per=39.34%, avg=22528.00, stdev=984.29, samples=2 00:12:13.640 iops : min= 5458, max= 5806, avg=5632.00, stdev=246.07, samples=2 00:12:13.640 lat (msec) : 4=0.16%, 10=18.44%, 20=81.41% 00:12:13.640 cpu : usr=4.79%, sys=15.47%, ctx=672, majf=0, minf=1 00:12:13.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:13.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.640 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:13.640 issued rwts: total=5632,5705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:13.640 job2: (groupid=0, jobs=1): err= 0: pid=77178: Mon Jul 15 22:07:00 2024 00:12:13.640 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:12:13.640 slat (usec): min=6, max=7705, avg=122.67, stdev=618.46 00:12:13.640 clat (usec): min=8714, max=23633, avg=15513.59, stdev=2383.99 00:12:13.640 lat (usec): min=8732, max=23656, avg=15636.26, stdev=2435.79 00:12:13.640 clat percentiles (usec): 00:12:13.640 | 1.00th=[10028], 5.00th=[11338], 10.00th=[12780], 20.00th=[13435], 00:12:13.640 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15533], 60.00th=[16319], 00:12:13.640 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18220], 95.00th=[19268], 00:12:13.640 | 99.00th=[21627], 99.50th=[21890], 99.90th=[22414], 99.95th=[22414], 00:12:13.640 | 99.99th=[23725] 00:12:13.640 write: IOPS=4277, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1007msec); 0 zone resets 00:12:13.640 slat (usec): min=9, max=6817, avg=107.45, stdev=450.58 00:12:13.640 clat (usec): min=6545, max=26825, avg=14810.98, stdev=2586.39 00:12:13.640 lat (usec): min=6559, max=26846, avg=14918.43, stdev=2624.01 00:12:13.640 clat percentiles (usec): 00:12:13.640 | 1.00th=[ 9110], 5.00th=[11338], 10.00th=[12256], 20.00th=[12911], 00:12:13.640 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14484], 60.00th=[14877], 00:12:13.640 | 70.00th=[15664], 80.00th=[16581], 90.00th=[17957], 95.00th=[19792], 00:12:13.640 | 99.00th=[22676], 99.50th=[24773], 99.90th=[26870], 99.95th=[26870], 00:12:13.640 | 99.99th=[26870] 00:12:13.640 bw ( KiB/s): min=16168, max=17272, per=29.20%, avg=16720.00, stdev=780.65, samples=2 00:12:13.640 iops : min= 4042, max= 4318, avg=4180.00, stdev=195.16, samples=2 00:12:13.640 lat (msec) : 10=1.45%, 20=94.28%, 50=4.27% 00:12:13.641 cpu : usr=3.78%, sys=11.53%, ctx=538, majf=0, minf=1 00:12:13.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:13.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:13.641 issued rwts: total=4096,4307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:13.641 job3: (groupid=0, jobs=1): err= 0: pid=77179: Mon Jul 15 22:07:00 2024 00:12:13.641 read: IOPS=2017, BW=8071KiB/s (8265kB/s)(8192KiB/1015msec) 00:12:13.641 slat (usec): min=4, max=14287, avg=186.96, stdev=976.16 00:12:13.641 clat (usec): min=13299, max=52150, avg=22335.81, stdev=7101.43 00:12:13.641 lat (usec): min=13311, max=52189, avg=22522.77, stdev=7185.36 00:12:13.641 clat percentiles (usec): 00:12:13.641 | 1.00th=[13960], 5.00th=[15664], 10.00th=[17695], 20.00th=[18482], 00:12:13.641 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[20317], 00:12:13.641 | 70.00th=[20841], 80.00th=[24249], 90.00th=[35390], 95.00th=[40109], 00:12:13.641 | 99.00th=[47973], 99.50th=[47973], 99.90th=[47973], 99.95th=[50594], 00:12:13.641 | 99.99th=[52167] 00:12:13.641 write: IOPS=2434, BW=9738KiB/s (9972kB/s)(9884KiB/1015msec); 0 zone resets 00:12:13.641 slat (usec): min=10, max=15232, avg=241.74, stdev=971.00 00:12:13.641 clat (usec): min=14136, max=49956, avg=33177.21, stdev=7317.72 00:12:13.641 lat (usec): min=15193, max=49985, avg=33418.95, stdev=7373.34 00:12:13.641 clat percentiles (usec): 00:12:13.641 | 1.00th=[17433], 5.00th=[22676], 
10.00th=[25297], 20.00th=[26870], 00:12:13.641 | 30.00th=[28705], 40.00th=[30540], 50.00th=[31851], 60.00th=[33162], 00:12:13.641 | 70.00th=[35914], 80.00th=[40633], 90.00th=[44303], 95.00th=[46924], 00:12:13.641 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:12:13.641 | 99.99th=[50070] 00:12:13.641 bw ( KiB/s): min= 9226, max= 9544, per=16.39%, avg=9385.00, stdev=224.86, samples=2 00:12:13.641 iops : min= 2306, max= 2386, avg=2346.00, stdev=56.57, samples=2 00:12:13.641 lat (msec) : 20=27.53%, 50=72.43%, 100=0.04% 00:12:13.641 cpu : usr=2.07%, sys=6.90%, ctx=325, majf=0, minf=4 00:12:13.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:12:13.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:13.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:13.641 issued rwts: total=2048,2471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:13.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:13.641 00:12:13.641 Run status group 0 (all jobs): 00:12:13.641 READ: bw=51.7MiB/s (54.2MB/s), 6550KiB/s-21.9MiB/s (6708kB/s-23.0MB/s), io=52.5MiB (55.0MB), run=1003-1015msec 00:12:13.641 WRITE: bw=55.9MiB/s (58.6MB/s), 8111KiB/s-22.2MiB/s (8306kB/s-23.3MB/s), io=56.8MiB (59.5MB), run=1003-1015msec 00:12:13.641 00:12:13.641 Disk stats (read/write): 00:12:13.641 nvme0n1: ios=1586/1647, merge=0/0, ticks=11329/23070, in_queue=34399, util=87.98% 00:12:13.641 nvme0n2: ios=4656/5029, merge=0/0, ticks=12292/11702, in_queue=23994, util=88.86% 00:12:13.641 nvme0n3: ios=3496/3584, merge=0/0, ticks=26553/23839, in_queue=50392, util=89.10% 00:12:13.641 nvme0n4: ios=1724/2048, merge=0/0, ticks=19593/32168, in_queue=51761, util=89.74% 00:12:13.641 22:07:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:13.641 [global] 00:12:13.641 thread=1 00:12:13.641 invalidate=1 00:12:13.641 rw=randwrite 00:12:13.641 time_based=1 00:12:13.641 runtime=1 00:12:13.641 ioengine=libaio 00:12:13.641 direct=1 00:12:13.641 bs=4096 00:12:13.641 iodepth=128 00:12:13.641 norandommap=0 00:12:13.641 numjobs=1 00:12:13.641 00:12:13.641 verify_dump=1 00:12:13.641 verify_backlog=512 00:12:13.641 verify_state_save=0 00:12:13.641 do_verify=1 00:12:13.641 verify=crc32c-intel 00:12:13.641 [job0] 00:12:13.641 filename=/dev/nvme0n1 00:12:13.641 [job1] 00:12:13.641 filename=/dev/nvme0n2 00:12:13.641 [job2] 00:12:13.641 filename=/dev/nvme0n3 00:12:13.641 [job3] 00:12:13.641 filename=/dev/nvme0n4 00:12:13.641 Could not set queue depth (nvme0n1) 00:12:13.641 Could not set queue depth (nvme0n2) 00:12:13.641 Could not set queue depth (nvme0n3) 00:12:13.641 Could not set queue depth (nvme0n4) 00:12:13.641 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:13.641 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:13.641 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:13.641 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:13.641 fio-3.35 00:12:13.641 Starting 4 threads 00:12:15.039 00:12:15.039 job0: (groupid=0, jobs=1): err= 0: pid=77232: Mon Jul 15 22:07:01 2024 00:12:15.039 read: IOPS=4544, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1014msec) 00:12:15.039 slat (usec): min=4, max=10920, 
avg=111.02, stdev=730.97 00:12:15.039 clat (usec): min=5253, max=25691, avg=14020.65, stdev=3356.93 00:12:15.039 lat (usec): min=5264, max=25735, avg=14131.66, stdev=3396.89 00:12:15.039 clat percentiles (usec): 00:12:15.039 | 1.00th=[ 5866], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10814], 00:12:15.039 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13042], 60.00th=[14353], 00:12:15.039 | 70.00th=[15795], 80.00th=[16450], 90.00th=[18744], 95.00th=[20579], 00:12:15.039 | 99.00th=[23200], 99.50th=[23462], 99.90th=[24511], 99.95th=[24511], 00:12:15.039 | 99.99th=[25822] 00:12:15.039 write: IOPS=4960, BW=19.4MiB/s (20.3MB/s)(19.6MiB/1014msec); 0 zone resets 00:12:15.039 slat (usec): min=4, max=11580, avg=89.97, stdev=458.09 00:12:15.039 clat (usec): min=4224, max=30668, avg=12709.91, stdev=3133.07 00:12:15.039 lat (usec): min=4244, max=30682, avg=12799.88, stdev=3173.25 00:12:15.039 clat percentiles (usec): 00:12:15.039 | 1.00th=[ 5145], 5.00th=[ 6783], 10.00th=[ 8029], 20.00th=[11076], 00:12:15.039 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:12:15.039 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14746], 95.00th=[17433], 00:12:15.039 | 99.00th=[23987], 99.50th=[27395], 99.90th=[30540], 99.95th=[30540], 00:12:15.039 | 99.99th=[30540] 00:12:15.039 bw ( KiB/s): min=18744, max=20480, per=26.37%, avg=19612.00, stdev=1227.54, samples=2 00:12:15.039 iops : min= 4686, max= 5120, avg=4903.00, stdev=306.88, samples=2 00:12:15.039 lat (msec) : 10=10.55%, 20=85.30%, 50=4.15% 00:12:15.039 cpu : usr=4.84%, sys=9.58%, ctx=656, majf=0, minf=7 00:12:15.039 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:15.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.039 issued rwts: total=4608,5030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.039 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.039 job1: (groupid=0, jobs=1): err= 0: pid=77233: Mon Jul 15 22:07:01 2024 00:12:15.039 read: IOPS=4619, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1003msec) 00:12:15.039 slat (usec): min=4, max=5014, avg=100.13, stdev=459.56 00:12:15.039 clat (usec): min=768, max=17149, avg=13200.97, stdev=1553.80 00:12:15.039 lat (usec): min=5783, max=17173, avg=13301.10, stdev=1505.27 00:12:15.039 clat percentiles (usec): 00:12:15.039 | 1.00th=[10028], 5.00th=[10945], 10.00th=[11731], 20.00th=[12387], 00:12:15.039 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13173], 00:12:15.039 | 70.00th=[13435], 80.00th=[13829], 90.00th=[15926], 95.00th=[16450], 00:12:15.039 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:12:15.039 | 99.99th=[17171] 00:12:15.039 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:12:15.039 slat (usec): min=9, max=3585, avg=97.20, stdev=394.11 00:12:15.039 clat (usec): min=6129, max=17229, avg=12799.58, stdev=1612.56 00:12:15.039 lat (usec): min=6149, max=17252, avg=12896.78, stdev=1608.61 00:12:15.039 clat percentiles (usec): 00:12:15.039 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:12:15.039 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:12:15.039 | 70.00th=[13435], 80.00th=[13698], 90.00th=[15139], 95.00th=[16319], 00:12:15.039 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:12:15.039 | 99.99th=[17171] 00:12:15.039 bw ( KiB/s): min=19656, max=20480, per=26.98%, avg=20068.00, stdev=582.66, 
samples=2 00:12:15.039 iops : min= 4914, max= 5120, avg=5017.00, stdev=145.66, samples=2 00:12:15.039 lat (usec) : 1000=0.01% 00:12:15.039 lat (msec) : 10=1.18%, 20=98.81% 00:12:15.039 cpu : usr=4.99%, sys=13.17%, ctx=546, majf=0, minf=7 00:12:15.039 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:15.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.040 issued rwts: total=4633,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.040 job2: (groupid=0, jobs=1): err= 0: pid=77234: Mon Jul 15 22:07:01 2024 00:12:15.040 read: IOPS=3934, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1007msec) 00:12:15.040 slat (usec): min=4, max=18034, avg=139.17, stdev=933.72 00:12:15.040 clat (usec): min=2844, max=35243, avg=17193.96, stdev=4895.10 00:12:15.040 lat (usec): min=6056, max=35280, avg=17333.12, stdev=4940.40 00:12:15.040 clat percentiles (usec): 00:12:15.040 | 1.00th=[ 6521], 5.00th=[11469], 10.00th=[11863], 20.00th=[12780], 00:12:15.040 | 30.00th=[14353], 40.00th=[14877], 50.00th=[16450], 60.00th=[17695], 00:12:15.040 | 70.00th=[18744], 80.00th=[21103], 90.00th=[23725], 95.00th=[26084], 00:12:15.040 | 99.00th=[32113], 99.50th=[33162], 99.90th=[35390], 99.95th=[35390], 00:12:15.040 | 99.99th=[35390] 00:12:15.040 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:12:15.040 slat (usec): min=4, max=14685, avg=102.84, stdev=482.96 00:12:15.040 clat (usec): min=4862, max=35200, avg=14540.80, stdev=3323.96 00:12:15.040 lat (usec): min=4885, max=35210, avg=14643.63, stdev=3360.98 00:12:15.040 clat percentiles (usec): 00:12:15.040 | 1.00th=[ 5735], 5.00th=[ 7242], 10.00th=[ 9110], 20.00th=[13435], 00:12:15.040 | 30.00th=[14222], 40.00th=[14615], 50.00th=[14877], 60.00th=[15008], 00:12:15.040 | 70.00th=[15401], 80.00th=[17433], 90.00th=[18482], 95.00th=[18744], 00:12:15.040 | 99.00th=[19006], 99.50th=[19268], 99.90th=[29754], 99.95th=[32900], 00:12:15.040 | 99.99th=[35390] 00:12:15.040 bw ( KiB/s): min=15760, max=17008, per=22.03%, avg=16384.00, stdev=882.47, samples=2 00:12:15.040 iops : min= 3940, max= 4252, avg=4096.00, stdev=220.62, samples=2 00:12:15.040 lat (msec) : 4=0.01%, 10=7.63%, 20=80.75%, 50=11.60% 00:12:15.040 cpu : usr=4.27%, sys=8.25%, ctx=574, majf=0, minf=9 00:12:15.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:15.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.040 issued rwts: total=3962,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.040 job3: (groupid=0, jobs=1): err= 0: pid=77235: Mon Jul 15 22:07:01 2024 00:12:15.040 read: IOPS=4288, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1012msec) 00:12:15.040 slat (usec): min=4, max=13656, avg=120.87, stdev=801.08 00:12:15.040 clat (usec): min=5566, max=29184, avg=15373.05, stdev=3909.29 00:12:15.040 lat (usec): min=5578, max=29973, avg=15493.92, stdev=3951.54 00:12:15.040 clat percentiles (usec): 00:12:15.040 | 1.00th=[ 6128], 5.00th=[10683], 10.00th=[11338], 20.00th=[13042], 00:12:15.040 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14222], 60.00th=[15139], 00:12:15.040 | 70.00th=[16319], 80.00th=[17433], 90.00th=[21103], 95.00th=[23987], 00:12:15.040 | 99.00th=[26346], 99.50th=[27132], 
99.90th=[28967], 99.95th=[29230], 00:12:15.040 | 99.99th=[29230] 00:12:15.040 write: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec); 0 zone resets 00:12:15.040 slat (usec): min=4, max=12776, avg=95.96, stdev=505.49 00:12:15.040 clat (usec): min=2554, max=28213, avg=13359.38, stdev=2794.59 00:12:15.040 lat (usec): min=2569, max=28221, avg=13455.34, stdev=2842.39 00:12:15.040 clat percentiles (usec): 00:12:15.040 | 1.00th=[ 5276], 5.00th=[ 7177], 10.00th=[ 8979], 20.00th=[12125], 00:12:15.040 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:12:15.040 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15664], 95.00th=[16581], 00:12:15.040 | 99.00th=[17695], 99.50th=[23462], 99.90th=[26346], 99.95th=[27132], 00:12:15.040 | 99.99th=[28181] 00:12:15.040 bw ( KiB/s): min=17240, max=19624, per=24.78%, avg=18432.00, stdev=1685.74, samples=2 00:12:15.040 iops : min= 4310, max= 4906, avg=4608.00, stdev=421.44, samples=2 00:12:15.040 lat (msec) : 4=0.08%, 10=7.66%, 20=85.07%, 50=7.20% 00:12:15.040 cpu : usr=4.06%, sys=9.99%, ctx=593, majf=0, minf=12 00:12:15.040 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:15.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.040 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.040 issued rwts: total=4340,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.040 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.040 00:12:15.040 Run status group 0 (all jobs): 00:12:15.040 READ: bw=67.6MiB/s (70.9MB/s), 15.4MiB/s-18.0MiB/s (16.1MB/s-18.9MB/s), io=68.5MiB (71.9MB), run=1003-1014msec 00:12:15.040 WRITE: bw=72.6MiB/s (76.2MB/s), 15.9MiB/s-19.9MiB/s (16.7MB/s-20.9MB/s), io=73.6MiB (77.2MB), run=1003-1014msec 00:12:15.040 00:12:15.040 Disk stats (read/write): 00:12:15.040 nvme0n1: ios=4146/4318, merge=0/0, ticks=52998/51028, in_queue=104026, util=89.27% 00:12:15.040 nvme0n2: ios=4140/4564, merge=0/0, ticks=12368/12638, in_queue=25006, util=89.86% 00:12:15.040 nvme0n3: ios=3214/3584, merge=0/0, ticks=53900/51146, in_queue=105046, util=89.51% 00:12:15.040 nvme0n4: ios=3584/4039, merge=0/0, ticks=51830/52721, in_queue=104551, util=89.76% 00:12:15.040 22:07:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:15.040 22:07:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77256 00:12:15.040 22:07:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:15.040 22:07:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:15.040 [global] 00:12:15.040 thread=1 00:12:15.040 invalidate=1 00:12:15.040 rw=read 00:12:15.040 time_based=1 00:12:15.040 runtime=10 00:12:15.040 ioengine=libaio 00:12:15.040 direct=1 00:12:15.040 bs=4096 00:12:15.040 iodepth=1 00:12:15.040 norandommap=1 00:12:15.040 numjobs=1 00:12:15.040 00:12:15.040 [job0] 00:12:15.040 filename=/dev/nvme0n1 00:12:15.040 [job1] 00:12:15.040 filename=/dev/nvme0n2 00:12:15.040 [job2] 00:12:15.040 filename=/dev/nvme0n3 00:12:15.040 [job3] 00:12:15.040 filename=/dev/nvme0n4 00:12:15.040 Could not set queue depth (nvme0n1) 00:12:15.040 Could not set queue depth (nvme0n2) 00:12:15.040 Could not set queue depth (nvme0n3) 00:12:15.040 Could not set queue depth (nvme0n4) 00:12:15.040 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.040 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:12:15.040 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.040 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.040 fio-3.35 00:12:15.040 Starting 4 threads 00:12:18.322 22:07:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:18.322 fio: pid=77302, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:18.322 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=49827840, buflen=4096 00:12:18.322 22:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:18.580 fio: pid=77301, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:18.580 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=59461632, buflen=4096 00:12:18.581 22:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:18.581 22:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:19.147 fio: pid=77299, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:19.147 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=46473216, buflen=4096 00:12:19.147 22:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:19.147 22:07:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:19.406 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=55734272, buflen=4096 00:12:19.406 fio: pid=77300, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:19.406 00:12:19.406 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77299: Mon Jul 15 22:07:06 2024 00:12:19.406 read: IOPS=2938, BW=11.5MiB/s (12.0MB/s)(44.3MiB/3862msec) 00:12:19.406 slat (usec): min=14, max=10541, avg=30.34, stdev=160.46 00:12:19.406 clat (usec): min=141, max=3330, avg=307.29, stdev=89.50 00:12:19.406 lat (usec): min=159, max=10969, avg=337.63, stdev=185.50 00:12:19.406 clat percentiles (usec): 00:12:19.406 | 1.00th=[ 163], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 265], 00:12:19.406 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:12:19.406 | 70.00th=[ 314], 80.00th=[ 351], 90.00th=[ 408], 95.00th=[ 429], 00:12:19.406 | 99.00th=[ 486], 99.50th=[ 570], 99.90th=[ 1434], 99.95th=[ 1975], 00:12:19.406 | 99.99th=[ 3195] 00:12:19.406 bw ( KiB/s): min= 9760, max=12832, per=24.68%, avg=11851.71, stdev=1104.92, samples=7 00:12:19.406 iops : min= 2440, max= 3208, avg=2962.86, stdev=276.29, samples=7 00:12:19.406 lat (usec) : 250=5.67%, 500=93.43%, 750=0.68%, 1000=0.08% 00:12:19.406 lat (msec) : 2=0.09%, 4=0.04% 00:12:19.406 cpu : usr=1.50%, sys=6.79%, ctx=11353, majf=0, minf=1 00:12:19.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.406 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.406 issued rwts: total=11347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.406 latency : target=0, window=0, percentile=100.00%, depth=1 
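(Editor's note: the per-job results in this group come from the job file printed just above: a time-based, 10-second, 4 KiB, queue-depth-1 libaio read against each connected namespace, launched through scripts/fio-wrapper while the test deletes the backing bdevs, hence err=121 Remote I/O error. A minimal standalone sketch of one such job, with the options copied from the [global] section above and the device path taken from this run, would look roughly like the following; substitute your own namespace path.)

# One job of the workload shown above; requires fio and a connected
# NVMe/TCP namespace at /dev/nvme0n1 (adjust the path for your system).
cat > /tmp/nvmf-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf-read.fio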
00:12:19.406 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77300: Mon Jul 15 22:07:06 2024 00:12:19.406 read: IOPS=3163, BW=12.4MiB/s (13.0MB/s)(53.2MiB/4301msec) 00:12:19.406 slat (usec): min=14, max=16765, avg=32.32, stdev=279.90 00:12:19.406 clat (usec): min=132, max=3471, avg=280.90, stdev=102.09 00:12:19.406 lat (usec): min=149, max=17040, avg=313.22, stdev=299.26 00:12:19.406 clat percentiles (usec): 00:12:19.406 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 239], 00:12:19.406 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 285], 00:12:19.406 | 70.00th=[ 302], 80.00th=[ 334], 90.00th=[ 396], 95.00th=[ 424], 00:12:19.406 | 99.00th=[ 478], 99.50th=[ 570], 99.90th=[ 1270], 99.95th=[ 1958], 00:12:19.406 | 99.99th=[ 2606] 00:12:19.406 bw ( KiB/s): min= 9888, max=15639, per=25.55%, avg=12271.00, stdev=1753.71, samples=8 00:12:19.406 iops : min= 2472, max= 3909, avg=3067.62, stdev=438.26, samples=8 00:12:19.406 lat (usec) : 250=24.05%, 500=75.16%, 750=0.50%, 1000=0.14% 00:12:19.406 lat (msec) : 2=0.10%, 4=0.04% 00:12:19.406 cpu : usr=1.58%, sys=7.23%, ctx=13621, majf=0, minf=1 00:12:19.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.406 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.406 issued rwts: total=13608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.406 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77301: Mon Jul 15 22:07:06 2024 00:12:19.406 read: IOPS=4253, BW=16.6MiB/s (17.4MB/s)(56.7MiB/3413msec) 00:12:19.406 slat (usec): min=13, max=15036, avg=22.98, stdev=169.07 00:12:19.406 clat (usec): min=145, max=1639, avg=209.86, stdev=41.70 00:12:19.406 lat (usec): min=161, max=15382, avg=232.84, stdev=176.33 00:12:19.406 clat percentiles (usec): 00:12:19.406 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 174], 00:12:19.406 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 202], 60.00th=[ 221], 00:12:19.406 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 260], 95.00th=[ 269], 00:12:19.406 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 457], 99.95th=[ 545], 00:12:19.406 | 99.99th=[ 1614] 00:12:19.406 bw ( KiB/s): min=14776, max=20144, per=36.34%, avg=17449.33, stdev=2073.57, samples=6 00:12:19.406 iops : min= 3694, max= 5036, avg=4362.33, stdev=518.39, samples=6 00:12:19.406 lat (usec) : 250=83.26%, 500=16.64%, 750=0.06%, 1000=0.01% 00:12:19.406 lat (msec) : 2=0.01% 00:12:19.406 cpu : usr=2.11%, sys=7.62%, ctx=14527, majf=0, minf=1 00:12:19.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.406 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.406 issued rwts: total=14518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.406 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77302: Mon Jul 15 22:07:06 2024 00:12:19.406 read: IOPS=3970, BW=15.5MiB/s (16.3MB/s)(47.5MiB/3064msec) 00:12:19.406 slat (usec): min=13, max=118, avg=19.87, stdev= 6.20 00:12:19.406 clat (usec): min=181, max=2151, avg=230.09, stdev=33.40 00:12:19.406 lat (usec): min=197, max=2176, avg=249.96, 
stdev=35.24 00:12:19.406 clat percentiles (usec): 00:12:19.406 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:12:19.406 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:12:19.406 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 260], 95.00th=[ 273], 00:12:19.406 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 453], 99.95th=[ 750], 00:12:19.406 | 99.99th=[ 1004] 00:12:19.406 bw ( KiB/s): min=14880, max=16552, per=33.09%, avg=15892.00, stdev=659.91, samples=6 00:12:19.406 iops : min= 3720, max= 4138, avg=3973.00, stdev=164.98, samples=6 00:12:19.406 lat (usec) : 250=82.80%, 500=17.10%, 750=0.04%, 1000=0.03% 00:12:19.406 lat (msec) : 2=0.01%, 4=0.01% 00:12:19.406 cpu : usr=1.40%, sys=6.56%, ctx=12166, majf=0, minf=1 00:12:19.406 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.406 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.406 issued rwts: total=12166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.406 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.406 00:12:19.406 Run status group 0 (all jobs): 00:12:19.406 READ: bw=46.9MiB/s (49.2MB/s), 11.5MiB/s-16.6MiB/s (12.0MB/s-17.4MB/s), io=202MiB (211MB), run=3064-4301msec 00:12:19.406 00:12:19.406 Disk stats (read/write): 00:12:19.406 nvme0n1: ios=11335/0, merge=0/0, ticks=3542/0, in_queue=3542, util=95.69% 00:12:19.406 nvme0n2: ios=12529/0, merge=0/0, ticks=3726/0, in_queue=3726, util=95.18% 00:12:19.406 nvme0n3: ios=14268/0, merge=0/0, ticks=3053/0, in_queue=3053, util=96.16% 00:12:19.406 nvme0n4: ios=11338/0, merge=0/0, ticks=2659/0, in_queue=2659, util=96.65% 00:12:19.406 22:07:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:19.406 22:07:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:19.971 22:07:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:19.971 22:07:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:20.228 22:07:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:20.228 22:07:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:20.486 22:07:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:20.486 22:07:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:20.744 22:07:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:20.744 22:07:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77256 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 
00:12:21.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.307 nvmf hotplug test: fio failed as expected 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:21.307 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:21.871 rmmod nvme_tcp 00:12:21.871 rmmod nvme_fabrics 00:12:21.871 rmmod nvme_keyring 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 76767 ']' 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 76767 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 76767 ']' 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 76767 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76767 00:12:21.871 killing process with pid 76767 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76767' 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 76767 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 76767 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.871 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.129 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.129 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.129 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.129 22:07:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:22.129 ************************************ 00:12:22.129 END TEST nvmf_fio_target 00:12:22.129 ************************************ 00:12:22.129 00:12:22.129 real 0m21.105s 00:12:22.129 user 1m21.597s 00:12:22.129 sys 0m10.271s 00:12:22.129 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:22.129 22:07:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.129 22:07:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:22.129 22:07:08 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:22.129 22:07:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:22.129 22:07:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.129 22:07:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:22.129 ************************************ 00:12:22.129 START TEST nvmf_bdevio 00:12:22.129 ************************************ 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:22.129 * Looking for test storage... 
00:12:22.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:22.129 22:07:08 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.130 22:07:08 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.130 22:07:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:22.130 Cannot find device "nvmf_tgt_br" 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:22.130 Cannot find device "nvmf_tgt_br2" 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:22.130 Cannot find device "nvmf_tgt_br" 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:22.130 Cannot find device "nvmf_tgt_br2" 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:12:22.130 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:22.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:22.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:22.387 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:22.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:12:22.645 00:12:22.645 --- 10.0.0.2 ping statistics --- 00:12:22.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.645 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:22.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:22.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:12:22.645 00:12:22.645 --- 10.0.0.3 ping statistics --- 00:12:22.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.645 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:22.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:12:22.645 00:12:22.645 --- 10.0.0.1 ping statistics --- 00:12:22.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.645 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=77631 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 77631 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 77631 ']' 00:12:22.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.645 22:07:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:22.645 [2024-07-15 22:07:09.439058] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:12:22.645 [2024-07-15 22:07:09.439186] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.645 [2024-07-15 22:07:09.572731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.902 [2024-07-15 22:07:09.658949] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.902 [2024-07-15 22:07:09.659236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
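(Editor's note: the pings above close out nvmf_veth_init: the initiator keeps 10.0.0.1 on the host side while the target addresses, 10.0.0.2 and 10.0.0.3, live inside the nvmf_tgt_ns_spdk namespace, joined by a bridge, after which nvmf_tgt is started inside that namespace. A condensed sketch of the same topology follows, with names and addresses taken from the trace; the second target interface and error handling are omitted, and it needs root.)

# Build the initiator <-> target veth/bridge topology used by these tests.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side (host)
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side (namespace)

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side veth peers and allow NVMe/TCP traffic on port 4420.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # initiator -> target reachability check, as in the log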
00:12:22.902 [2024-07-15 22:07:09.659958] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.902 [2024-07-15 22:07:09.660497] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.902 [2024-07-15 22:07:09.660808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.902 [2024-07-15 22:07:09.661338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:22.902 [2024-07-15 22:07:09.661537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:22.902 [2024-07-15 22:07:09.661594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:22.902 [2024-07-15 22:07:09.661597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.468 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.468 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:12:23.468 22:07:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:23.468 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:23.468 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:23.728 [2024-07-15 22:07:10.455542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:23.728 Malloc0 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:12:23.728 [2024-07-15 22:07:10.519648] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:23.728 { 00:12:23.728 "params": { 00:12:23.728 "name": "Nvme$subsystem", 00:12:23.728 "trtype": "$TEST_TRANSPORT", 00:12:23.728 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:23.728 "adrfam": "ipv4", 00:12:23.728 "trsvcid": "$NVMF_PORT", 00:12:23.728 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:23.728 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:23.728 "hdgst": ${hdgst:-false}, 00:12:23.728 "ddgst": ${ddgst:-false} 00:12:23.728 }, 00:12:23.728 "method": "bdev_nvme_attach_controller" 00:12:23.728 } 00:12:23.728 EOF 00:12:23.728 )") 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:23.728 22:07:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:23.728 "params": { 00:12:23.728 "name": "Nvme1", 00:12:23.728 "trtype": "tcp", 00:12:23.728 "traddr": "10.0.0.2", 00:12:23.728 "adrfam": "ipv4", 00:12:23.728 "trsvcid": "4420", 00:12:23.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:23.728 "hdgst": false, 00:12:23.728 "ddgst": false 00:12:23.728 }, 00:12:23.728 "method": "bdev_nvme_attach_controller" 00:12:23.728 }' 00:12:23.728 [2024-07-15 22:07:10.577101] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
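(Editor's note: with the target listening (the RPC sequence traced above: TCP transport, a 64 MiB Malloc0 bdev, subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420), bdevio attaches from its own process using the JSON fragment printed by gen_nvmf_target_json, fed in via /dev/fd/62. The sketch below drives roughly the same setup with rpc.py directly; the subsystems/bdev/config envelope around the printed fragment is assumed from that helper and is not shown verbatim in this log.)

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target-side setup, mirroring the rpc_cmd calls traced above.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevio builds an Nvme1n1 bdev from this JSON config and
# runs its read/write/compare suite against it (23 tests, as reported below).
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)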
00:12:23.728 [2024-07-15 22:07:10.577192] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77685 ] 00:12:23.988 [2024-07-15 22:07:10.741417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:23.988 [2024-07-15 22:07:10.888305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.988 [2024-07-15 22:07:10.888445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.988 [2024-07-15 22:07:10.888456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.247 I/O targets: 00:12:24.247 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:24.247 00:12:24.247 00:12:24.247 CUnit - A unit testing framework for C - Version 2.1-3 00:12:24.247 http://cunit.sourceforge.net/ 00:12:24.247 00:12:24.247 00:12:24.247 Suite: bdevio tests on: Nvme1n1 00:12:24.247 Test: blockdev write read block ...passed 00:12:24.247 Test: blockdev write zeroes read block ...passed 00:12:24.247 Test: blockdev write zeroes read no split ...passed 00:12:24.506 Test: blockdev write zeroes read split ...passed 00:12:24.506 Test: blockdev write zeroes read split partial ...passed 00:12:24.506 Test: blockdev reset ...[2024-07-15 22:07:11.209923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:24.506 [2024-07-15 22:07:11.210395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f95180 (9): Bad file descriptor 00:12:24.506 [2024-07-15 22:07:11.226502] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:24.506 passed 00:12:24.506 Test: blockdev write read 8 blocks ...passed 00:12:24.506 Test: blockdev write read size > 128k ...passed 00:12:24.506 Test: blockdev write read invalid size ...passed 00:12:24.506 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.506 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.506 Test: blockdev write read max offset ...passed 00:12:24.506 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.506 Test: blockdev writev readv 8 blocks ...passed 00:12:24.506 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.506 Test: blockdev writev readv block ...passed 00:12:24.506 Test: blockdev writev readv size > 128k ...passed 00:12:24.506 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.506 Test: blockdev comparev and writev ...[2024-07-15 22:07:11.400215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.506 [2024-07-15 22:07:11.400276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:24.506 [2024-07-15 22:07:11.400300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.506 [2024-07-15 22:07:11.400311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:24.506 [2024-07-15 22:07:11.400600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.506 [2024-07-15 22:07:11.400617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:24.506 [2024-07-15 22:07:11.400634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.506 [2024-07-15 22:07:11.400644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:24.506 [2024-07-15 22:07:11.400917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.506 [2024-07-15 22:07:11.400932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:24.506 [2024-07-15 22:07:11.400948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.506 [2024-07-15 22:07:11.400959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:24.506 [2024-07-15 22:07:11.401266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.506 [2024-07-15 22:07:11.401283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:24.507 [2024-07-15 22:07:11.401299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.507 [2024-07-15 22:07:11.401309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:24.507 passed 00:12:24.765 Test: blockdev nvme passthru rw ...passed 00:12:24.765 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.765 Test: blockdev nvme admin passthru ...[2024-07-15 22:07:11.483522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:24.765 [2024-07-15 22:07:11.483585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:24.765 [2024-07-15 22:07:11.483714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:24.765 [2024-07-15 22:07:11.483731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:24.765 [2024-07-15 22:07:11.483875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:24.765 [2024-07-15 22:07:11.483893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:24.765 [2024-07-15 22:07:11.484007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:24.765 [2024-07-15 22:07:11.484022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:24.765 passed 00:12:24.765 Test: blockdev copy ...passed 00:12:24.765 00:12:24.765 Run Summary: Type Total Ran Passed Failed Inactive 00:12:24.765 suites 1 1 n/a 0 0 00:12:24.765 tests 23 23 23 0 0 00:12:24.765 asserts 152 152 152 0 n/a 00:12:24.765 00:12:24.765 Elapsed time = 0.907 seconds 00:12:24.765 22:07:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.765 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.765 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:24.765 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.765 22:07:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:24.765 22:07:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:24.765 22:07:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:24.765 22:07:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:25.023 22:07:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:25.024 rmmod nvme_tcp 00:12:25.024 rmmod nvme_fabrics 00:12:25.024 rmmod nvme_keyring 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 77631 ']' 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 77631 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
77631 ']' 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 77631 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77631 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:12:25.024 killing process with pid 77631 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77631' 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 77631 00:12:25.024 22:07:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 77631 00:12:25.282 22:07:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:25.282 22:07:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:25.282 22:07:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:25.282 22:07:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.282 22:07:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:25.282 22:07:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.282 22:07:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.282 22:07:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.282 22:07:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:25.282 00:12:25.282 real 0m3.153s 00:12:25.282 user 0m11.417s 00:12:25.282 sys 0m0.745s 00:12:25.282 ************************************ 00:12:25.282 END TEST nvmf_bdevio 00:12:25.282 ************************************ 00:12:25.282 22:07:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.283 22:07:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:25.283 22:07:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:25.283 22:07:12 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:25.283 22:07:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:25.283 22:07:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.283 22:07:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:25.283 ************************************ 00:12:25.283 START TEST nvmf_auth_target 00:12:25.283 ************************************ 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:25.283 * Looking for test storage... 
00:12:25.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:25.283 Cannot find device "nvmf_tgt_br" 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:12:25.283 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:25.542 Cannot find device "nvmf_tgt_br2" 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:25.542 Cannot find device "nvmf_tgt_br" 00:12:25.542 
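The "Cannot find device" and "Cannot open network namespace" messages in this part of the trace are expected: nvmf_veth_init tears down any leftovers from a previous run before rebuilding, and each failing cleanup command is followed by "true" so the script keeps going. A condensed sketch of the topology the subsequent commands build, using the interface, namespace, and address values shown in this log (a summary of the trace, not a replacement for nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target interfaces move into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # accept NVMe/TCP traffic on the initiator interface
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # initiator-to-target reachability check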
22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:25.542 Cannot find device "nvmf_tgt_br2" 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:25.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:25.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:25.542 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:25.801 22:07:12 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:25.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:12:25.801 00:12:25.801 --- 10.0.0.2 ping statistics --- 00:12:25.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.801 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:25.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:25.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:12:25.801 00:12:25.801 --- 10.0.0.3 ping statistics --- 00:12:25.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.801 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:25.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:12:25.801 00:12:25.801 --- 10.0.0.1 ping statistics --- 00:12:25.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.801 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77867 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77867 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77867 ']' 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.801 22:07:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.801 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77898 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d229ade956402527ce7b4a445744d727de07510032fbe8a4 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.89R 00:12:26.063 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d229ade956402527ce7b4a445744d727de07510032fbe8a4 0 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d229ade956402527ce7b4a445744d727de07510032fbe8a4 0 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d229ade956402527ce7b4a445744d727de07510032fbe8a4 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.89R 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.89R 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.89R 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cd9c62f5ad4bf7afe3baefa1cd34860c874c799cd9081d706d2082d6f3b582fb 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.MDs 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cd9c62f5ad4bf7afe3baefa1cd34860c874c799cd9081d706d2082d6f3b582fb 3 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cd9c62f5ad4bf7afe3baefa1cd34860c874c799cd9081d706d2082d6f3b582fb 3 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cd9c62f5ad4bf7afe3baefa1cd34860c874c799cd9081d706d2082d6f3b582fb 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:26.064 22:07:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.MDs 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.MDs 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.MDs 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=713ecb8d81202e2bdf1a4c0e31dbe029 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2XG 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 713ecb8d81202e2bdf1a4c0e31dbe029 1 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 713ecb8d81202e2bdf1a4c0e31dbe029 1 
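Each gen_dhchap_key call in this stretch of the trace follows the same pattern: read N random bytes from /dev/urandom as a hex string with xxd, then have format_dhchap_key wrap that string as an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<digest id>:<base64 payload>:, where the digest id (00 none, 01 sha256, 02 sha384, 03 sha512) matches the number passed as the second argument. The python one-liner itself is not expanded in the trace; judging from the DHHC-1 secrets printed later in this log, the payload is the ASCII key with a CRC-32 trailer, base64 encoded. A hypothetical stand-alone reconstruction (the real helpers live in nvmf/common.sh, so details such as the CRC byte order are assumptions):

    key=$(xxd -p -c0 -l 24 /dev/urandom)        # 24 random bytes -> 48 hex characters of key material
    file=$(mktemp -t spdk.key-null.XXX)
    # wrap as DHHC-1:<digest id>:<base64(key + crc32)>: ; "00" = no hash, as used for keys[0] above
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode() + ":")' "$key" > "$file"
    chmod 0600 "$file"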
00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=713ecb8d81202e2bdf1a4c0e31dbe029 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:26.359 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2XG 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2XG 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.2XG 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4c618cc6efe15eaa99b024693da03799ecf116b19b9bc27f 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.84J 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4c618cc6efe15eaa99b024693da03799ecf116b19b9bc27f 2 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4c618cc6efe15eaa99b024693da03799ecf116b19b9bc27f 2 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4c618cc6efe15eaa99b024693da03799ecf116b19b9bc27f 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.84J 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.84J 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.84J 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:12:26.360 
22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e0ffa1fbfb23f04c5449583ad370a5659028fba0870200be 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qWR 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e0ffa1fbfb23f04c5449583ad370a5659028fba0870200be 2 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e0ffa1fbfb23f04c5449583ad370a5659028fba0870200be 2 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e0ffa1fbfb23f04c5449583ad370a5659028fba0870200be 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qWR 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qWR 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.qWR 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1890104f5996a70d4c409f2c0e16b9e3 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BLL 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1890104f5996a70d4c409f2c0e16b9e3 1 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1890104f5996a70d4c409f2c0e16b9e3 1 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1890104f5996a70d4c409f2c0e16b9e3 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:12:26.360 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BLL 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BLL 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.BLL 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e17e2052fa5d4e6561736a6aa19a68c74f81afc8d96b2b2e5c3a823f636e4bdd 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.inK 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e17e2052fa5d4e6561736a6aa19a68c74f81afc8d96b2b2e5c3a823f636e4bdd 3 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e17e2052fa5d4e6561736a6aa19a68c74f81afc8d96b2b2e5c3a823f636e4bdd 3 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e17e2052fa5d4e6561736a6aa19a68c74f81afc8d96b2b2e5c3a823f636e4bdd 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.inK 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.inK 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.inK 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77867 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77867 ']' 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
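At this point all four keys (and their ctrlr counterparts) exist as files; the rest of the setup makes them visible to both RPC servers, the nvmf target started earlier (pid 77867, default socket /var/tmp/spdk.sock) and the host-side spdk_tgt (pid 77898, started with -r /var/tmp/host.sock). The loop below registers every key file twice with keyring_file_add_key; the per-digest/per-dhgroup loops that follow then authorize the host NQN on the target (nvmf_subsystem_add_host), constrain what the host may negotiate (bdev_nvme_set_options), and attach (bdev_nvme_attach_controller). Condensed to the key0/ckey0 case, with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py exactly as invoked in the trace:

    rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.89R                        # target side
    rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.89R  # host side
    rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MDs                      # bidirectional (controller) key
    rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MDs
    # target: allow the host NQN on the subsystem with this key pair
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host: restrict the negotiation, then attach and complete DH-HMAC-CHAP
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

The nvme connect commands later in the log exercise the same keys from the kernel initiator side, passing the DHHC-1 strings directly as --dhchap-secret and --dhchap-ctrl-secret instead of keyring names.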
00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.619 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.877 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.877 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:26.877 22:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77898 /var/tmp/host.sock 00:12:26.877 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77898 ']' 00:12:26.877 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:26.877 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.877 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:26.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:26.877 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.877 22:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.136 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.136 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:27.136 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:12:27.136 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.136 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.393 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.393 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:27.393 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.89R 00:12:27.393 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.393 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.393 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.393 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.89R 00:12:27.393 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.89R 00:12:27.651 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.MDs ]] 00:12:27.651 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MDs 00:12:27.651 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.651 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.651 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.651 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MDs 00:12:27.651 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.MDs 00:12:27.908 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:27.908 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2XG 00:12:27.908 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.908 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.908 22:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.908 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.2XG 00:12:27.908 22:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.2XG 00:12:28.166 22:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.84J ]] 00:12:28.166 22:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.84J 00:12:28.166 22:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.166 22:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.166 22:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.166 22:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.84J 00:12:28.166 22:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.84J 00:12:28.425 22:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:28.425 22:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qWR 00:12:28.425 22:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.425 22:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.425 22:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.425 22:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.qWR 00:12:28.425 22:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.qWR 00:12:28.991 22:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.BLL ]] 00:12:28.991 22:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BLL 00:12:28.991 22:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.991 22:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.991 22:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.991 22:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BLL 00:12:28.991 22:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BLL 00:12:29.248 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:29.248 
22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.inK 00:12:29.248 22:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.248 22:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.248 22:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.248 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.inK 00:12:29.248 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.inK 00:12:29.506 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:12:29.506 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:29.506 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:29.506 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.506 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:29.506 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:29.764 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:12:29.764 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.764 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:29.764 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:29.764 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:29.764 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.764 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.764 22:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.764 22:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.764 22:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.764 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.764 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.329 00:12:30.329 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.329 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:30.329 22:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.329 22:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.329 22:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.329 22:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.329 22:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.329 22:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.329 22:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.329 { 00:12:30.329 "auth": { 00:12:30.329 "dhgroup": "null", 00:12:30.329 "digest": "sha256", 00:12:30.329 "state": "completed" 00:12:30.329 }, 00:12:30.329 "cntlid": 1, 00:12:30.329 "listen_address": { 00:12:30.329 "adrfam": "IPv4", 00:12:30.329 "traddr": "10.0.0.2", 00:12:30.329 "trsvcid": "4420", 00:12:30.329 "trtype": "TCP" 00:12:30.329 }, 00:12:30.329 "peer_address": { 00:12:30.329 "adrfam": "IPv4", 00:12:30.329 "traddr": "10.0.0.1", 00:12:30.329 "trsvcid": "34252", 00:12:30.329 "trtype": "TCP" 00:12:30.329 }, 00:12:30.329 "qid": 0, 00:12:30.329 "state": "enabled", 00:12:30.329 "thread": "nvmf_tgt_poll_group_000" 00:12:30.329 } 00:12:30.329 ]' 00:12:30.329 22:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.588 22:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.588 22:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.588 22:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:30.588 22:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.588 22:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.588 22:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.588 22:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.847 22:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:12:36.110 22:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.110 22:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:36.110 22:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.110 22:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.110 22:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.110 22:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:36.110 22:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:36.110 22:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:36.110 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:12:36.110 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.110 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:36.110 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:36.110 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:36.110 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.110 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.110 22:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.110 22:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.110 22:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.110 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.110 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.675 00:12:36.675 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.675 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.675 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.933 { 00:12:36.933 "auth": { 00:12:36.933 "dhgroup": "null", 00:12:36.933 "digest": "sha256", 00:12:36.933 "state": "completed" 00:12:36.933 }, 00:12:36.933 "cntlid": 3, 00:12:36.933 "listen_address": { 00:12:36.933 "adrfam": "IPv4", 00:12:36.933 "traddr": "10.0.0.2", 00:12:36.933 "trsvcid": "4420", 00:12:36.933 "trtype": "TCP" 00:12:36.933 }, 00:12:36.933 "peer_address": { 
00:12:36.933 "adrfam": "IPv4", 00:12:36.933 "traddr": "10.0.0.1", 00:12:36.933 "trsvcid": "59042", 00:12:36.933 "trtype": "TCP" 00:12:36.933 }, 00:12:36.933 "qid": 0, 00:12:36.933 "state": "enabled", 00:12:36.933 "thread": "nvmf_tgt_poll_group_000" 00:12:36.933 } 00:12:36.933 ]' 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.933 22:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.190 22:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:12:38.208 22:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.208 22:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:38.208 22:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.208 22:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.208 22:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.208 22:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.208 22:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:38.208 22:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:38.466 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:12:38.466 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.466 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:38.466 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:38.466 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:38.466 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.466 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.466 22:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.466 22:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.466 22:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.466 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.466 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.724 00:12:38.724 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.724 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.724 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.982 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.982 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.982 22:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.982 22:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.982 22:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.982 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.982 { 00:12:38.982 "auth": { 00:12:38.982 "dhgroup": "null", 00:12:38.982 "digest": "sha256", 00:12:38.982 "state": "completed" 00:12:38.982 }, 00:12:38.982 "cntlid": 5, 00:12:38.982 "listen_address": { 00:12:38.982 "adrfam": "IPv4", 00:12:38.982 "traddr": "10.0.0.2", 00:12:38.982 "trsvcid": "4420", 00:12:38.982 "trtype": "TCP" 00:12:38.982 }, 00:12:38.982 "peer_address": { 00:12:38.982 "adrfam": "IPv4", 00:12:38.982 "traddr": "10.0.0.1", 00:12:38.982 "trsvcid": "59062", 00:12:38.982 "trtype": "TCP" 00:12:38.982 }, 00:12:38.982 "qid": 0, 00:12:38.982 "state": "enabled", 00:12:38.982 "thread": "nvmf_tgt_poll_group_000" 00:12:38.982 } 00:12:38.982 ]' 00:12:38.982 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.240 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:39.240 22:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.240 22:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:39.240 22:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.240 22:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.240 22:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.240 22:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.498 22:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:12:40.431 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.431 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:40.431 22:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.431 22:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.431 22:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.431 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:40.431 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:40.431 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:40.689 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:12:40.689 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:40.689 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:40.689 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:40.689 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:40.689 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.689 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:12:40.689 22:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.689 22:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.689 22:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.689 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:40.689 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:40.947 00:12:40.947 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.947 22:07:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.947 22:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.206 22:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.206 22:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.206 22:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.206 22:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.206 22:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.206 22:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.206 { 00:12:41.206 "auth": { 00:12:41.206 "dhgroup": "null", 00:12:41.206 "digest": "sha256", 00:12:41.206 "state": "completed" 00:12:41.206 }, 00:12:41.206 "cntlid": 7, 00:12:41.206 "listen_address": { 00:12:41.206 "adrfam": "IPv4", 00:12:41.206 "traddr": "10.0.0.2", 00:12:41.206 "trsvcid": "4420", 00:12:41.206 "trtype": "TCP" 00:12:41.206 }, 00:12:41.206 "peer_address": { 00:12:41.206 "adrfam": "IPv4", 00:12:41.206 "traddr": "10.0.0.1", 00:12:41.206 "trsvcid": "59094", 00:12:41.206 "trtype": "TCP" 00:12:41.206 }, 00:12:41.206 "qid": 0, 00:12:41.206 "state": "enabled", 00:12:41.206 "thread": "nvmf_tgt_poll_group_000" 00:12:41.206 } 00:12:41.206 ]' 00:12:41.206 22:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.206 22:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:41.206 22:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.464 22:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:41.464 22:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.464 22:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.464 22:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.464 22:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.722 22:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:12:42.657 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.657 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:42.657 22:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.657 22:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.657 22:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.657 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:12:42.657 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.657 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:42.657 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:42.916 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:12:42.916 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.916 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:42.916 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:42.916 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:42.916 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.916 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.916 22:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.916 22:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.916 22:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.916 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.916 22:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.214 00:12:43.214 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.214 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.214 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.471 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.471 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.471 22:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.471 22:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.471 22:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.471 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.471 { 00:12:43.471 "auth": { 00:12:43.471 "dhgroup": "ffdhe2048", 00:12:43.471 "digest": "sha256", 00:12:43.471 "state": "completed" 00:12:43.471 }, 00:12:43.471 "cntlid": 9, 00:12:43.471 "listen_address": { 00:12:43.471 "adrfam": "IPv4", 
00:12:43.471 "traddr": "10.0.0.2", 00:12:43.471 "trsvcid": "4420", 00:12:43.471 "trtype": "TCP" 00:12:43.471 }, 00:12:43.471 "peer_address": { 00:12:43.471 "adrfam": "IPv4", 00:12:43.471 "traddr": "10.0.0.1", 00:12:43.471 "trsvcid": "59118", 00:12:43.471 "trtype": "TCP" 00:12:43.471 }, 00:12:43.471 "qid": 0, 00:12:43.471 "state": "enabled", 00:12:43.471 "thread": "nvmf_tgt_poll_group_000" 00:12:43.471 } 00:12:43.471 ]' 00:12:43.471 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.729 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:43.729 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.729 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:43.729 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.729 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.729 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.729 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.987 22:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:12:44.922 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.922 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:44.922 22:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.922 22:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.922 22:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.922 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.922 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:44.922 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:45.181 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:12:45.181 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:45.181 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:45.181 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:45.181 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:45.181 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.181 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.181 22:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.181 22:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.181 22:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.181 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.181 22:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.439 00:12:45.439 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.439 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.439 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.006 { 00:12:46.006 "auth": { 00:12:46.006 "dhgroup": "ffdhe2048", 00:12:46.006 "digest": "sha256", 00:12:46.006 "state": "completed" 00:12:46.006 }, 00:12:46.006 "cntlid": 11, 00:12:46.006 "listen_address": { 00:12:46.006 "adrfam": "IPv4", 00:12:46.006 "traddr": "10.0.0.2", 00:12:46.006 "trsvcid": "4420", 00:12:46.006 "trtype": "TCP" 00:12:46.006 }, 00:12:46.006 "peer_address": { 00:12:46.006 "adrfam": "IPv4", 00:12:46.006 "traddr": "10.0.0.1", 00:12:46.006 "trsvcid": "59308", 00:12:46.006 "trtype": "TCP" 00:12:46.006 }, 00:12:46.006 "qid": 0, 00:12:46.006 "state": "enabled", 00:12:46.006 "thread": "nvmf_tgt_poll_group_000" 00:12:46.006 } 00:12:46.006 ]' 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.006 22:07:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.006 22:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.264 22:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:12:47.194 22:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.194 22:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:47.194 22:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.194 22:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.194 22:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.194 22:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:47.194 22:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:47.194 22:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:47.452 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:12:47.452 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.452 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:47.452 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:47.452 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:47.452 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.452 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.452 22:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.452 22:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.452 22:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.452 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.452 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.711 00:12:47.711 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.711 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.711 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.278 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.278 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.278 22:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.278 22:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.278 22:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.278 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.278 { 00:12:48.278 "auth": { 00:12:48.278 "dhgroup": "ffdhe2048", 00:12:48.278 "digest": "sha256", 00:12:48.278 "state": "completed" 00:12:48.278 }, 00:12:48.278 "cntlid": 13, 00:12:48.278 "listen_address": { 00:12:48.278 "adrfam": "IPv4", 00:12:48.278 "traddr": "10.0.0.2", 00:12:48.278 "trsvcid": "4420", 00:12:48.278 "trtype": "TCP" 00:12:48.278 }, 00:12:48.278 "peer_address": { 00:12:48.278 "adrfam": "IPv4", 00:12:48.278 "traddr": "10.0.0.1", 00:12:48.278 "trsvcid": "59344", 00:12:48.278 "trtype": "TCP" 00:12:48.278 }, 00:12:48.278 "qid": 0, 00:12:48.278 "state": "enabled", 00:12:48.278 "thread": "nvmf_tgt_poll_group_000" 00:12:48.278 } 00:12:48.278 ]' 00:12:48.278 22:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.278 22:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:48.278 22:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.278 22:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:48.278 22:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.278 22:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.278 22:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.278 22:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.537 22:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 
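Each pass of the connect_authenticate loop recorded above exercises one digest/DH-group/key-index combination end to end: the host bdev driver is restricted to the parameters under test, the host NQN is registered on the subsystem with the corresponding key pair, a controller is attached (forcing the DH-HMAC-CHAP exchange), verified, and torn down again before the next pass. A minimal sketch of the sha256/ffdhe2048/key2 pass just logged, reusing the rpc.py path and host socket from this run; rpc_cmd in the script is its wrapper for the target application's RPC socket (not shown in the log), so it appears here as a bare rpc.py call for brevity:

# host side: restrict the initiator to the digest/dhgroup under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# target side (script's rpc_cmd wrapper): allow this host NQN with the key pair under test
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# host side: attach a controller, which performs the in-band authentication
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# ... verify via the get_controllers / get_qpairs checks shown in the log, then tear down
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29

Note also the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at auth.sh@37: when no controller key is configured for a key index, the --dhchap-ctrlr-key flag is dropped entirely, which is why the key3 passes in this log issue nvmf_subsystem_add_host and bdev_nvme_attach_controller with --dhchap-key key3 only.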
00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:49.475 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.042 00:12:50.042 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.042 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.042 22:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.300 { 00:12:50.300 "auth": { 00:12:50.300 "dhgroup": 
"ffdhe2048", 00:12:50.300 "digest": "sha256", 00:12:50.300 "state": "completed" 00:12:50.300 }, 00:12:50.300 "cntlid": 15, 00:12:50.300 "listen_address": { 00:12:50.300 "adrfam": "IPv4", 00:12:50.300 "traddr": "10.0.0.2", 00:12:50.300 "trsvcid": "4420", 00:12:50.300 "trtype": "TCP" 00:12:50.300 }, 00:12:50.300 "peer_address": { 00:12:50.300 "adrfam": "IPv4", 00:12:50.300 "traddr": "10.0.0.1", 00:12:50.300 "trsvcid": "59382", 00:12:50.300 "trtype": "TCP" 00:12:50.300 }, 00:12:50.300 "qid": 0, 00:12:50.300 "state": "enabled", 00:12:50.300 "thread": "nvmf_tgt_poll_group_000" 00:12:50.300 } 00:12:50.300 ]' 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.300 22:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.558 22:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:12:51.491 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.491 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:51.491 22:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.491 22:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.491 22:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.491 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:51.491 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.491 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:51.492 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:52.058 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:12:52.058 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.058 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:52.058 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# dhgroup=ffdhe3072 00:12:52.058 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:52.058 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.058 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.058 22:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.058 22:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.058 22:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.058 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.058 22:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.315 00:12:52.315 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.315 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.315 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.573 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.573 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.573 22:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.573 22:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.573 22:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.573 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:52.573 { 00:12:52.573 "auth": { 00:12:52.573 "dhgroup": "ffdhe3072", 00:12:52.573 "digest": "sha256", 00:12:52.573 "state": "completed" 00:12:52.573 }, 00:12:52.573 "cntlid": 17, 00:12:52.573 "listen_address": { 00:12:52.573 "adrfam": "IPv4", 00:12:52.573 "traddr": "10.0.0.2", 00:12:52.573 "trsvcid": "4420", 00:12:52.573 "trtype": "TCP" 00:12:52.573 }, 00:12:52.573 "peer_address": { 00:12:52.573 "adrfam": "IPv4", 00:12:52.573 "traddr": "10.0.0.1", 00:12:52.573 "trsvcid": "59396", 00:12:52.574 "trtype": "TCP" 00:12:52.574 }, 00:12:52.574 "qid": 0, 00:12:52.574 "state": "enabled", 00:12:52.574 "thread": "nvmf_tgt_poll_group_000" 00:12:52.574 } 00:12:52.574 ]' 00:12:52.574 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.574 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:52.574 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.574 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:52.832 22:07:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.832 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.832 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.832 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.091 22:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.026 22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.026 
22:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.593 00:12:54.593 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:54.593 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.593 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:54.861 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.861 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.861 22:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.861 22:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.861 22:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.861 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:54.861 { 00:12:54.861 "auth": { 00:12:54.861 "dhgroup": "ffdhe3072", 00:12:54.861 "digest": "sha256", 00:12:54.861 "state": "completed" 00:12:54.861 }, 00:12:54.861 "cntlid": 19, 00:12:54.861 "listen_address": { 00:12:54.861 "adrfam": "IPv4", 00:12:54.861 "traddr": "10.0.0.2", 00:12:54.861 "trsvcid": "4420", 00:12:54.861 "trtype": "TCP" 00:12:54.861 }, 00:12:54.861 "peer_address": { 00:12:54.861 "adrfam": "IPv4", 00:12:54.861 "traddr": "10.0.0.1", 00:12:54.861 "trsvcid": "52380", 00:12:54.861 "trtype": "TCP" 00:12:54.861 }, 00:12:54.861 "qid": 0, 00:12:54.861 "state": "enabled", 00:12:54.861 "thread": "nvmf_tgt_poll_group_000" 00:12:54.861 } 00:12:54.861 ]' 00:12:54.861 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:54.861 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:54.861 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:54.861 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:54.861 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.124 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.124 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.124 22:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.389 22:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:12:55.955 22:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
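Between the detach and the remove_host, the same credentials are also exercised through the Linux kernel initiator (auth.sh@52 and @55 in the fragments above): nvme connect is handed the DHHC-1 secrets directly, and the subsequent "disconnected 1 controller(s)" message is what the script relies on as evidence that the kernel connect, and with it the in-band authentication, succeeded. A sketch of that step; HOST_KEY and CTRL_KEY are placeholder variables standing in for the base64 DHHC-1 strings seen in the log, not real values:

# e.g. HOST_KEY='DHHC-1:01:...' CTRL_KEY='DHHC-1:02:...' set beforehand with the generated secrets
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
    --hostid ff65e169-209e-4b79-b82d-da213c413a29 \
    --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect "disconnected 1 controller(s)"

Passes whose key index has no controller key omit --dhchap-ctrl-secret here as well, mirroring the RPC side (visible in the key3 connects elsewhere in this log, which pass only --dhchap-secret).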
00:12:55.955 22:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:55.955 22:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.955 22:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.955 22:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.955 22:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.955 22:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:55.955 22:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:56.214 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:12:56.214 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.214 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:56.214 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:56.214 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:56.214 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.214 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.214 22:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.214 22:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.214 22:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.214 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.214 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:56.779 00:12:56.779 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.779 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.779 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.037 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.038 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.038 22:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.038 22:07:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.038 22:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.038 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:57.038 { 00:12:57.038 "auth": { 00:12:57.038 "dhgroup": "ffdhe3072", 00:12:57.038 "digest": "sha256", 00:12:57.038 "state": "completed" 00:12:57.038 }, 00:12:57.038 "cntlid": 21, 00:12:57.038 "listen_address": { 00:12:57.038 "adrfam": "IPv4", 00:12:57.038 "traddr": "10.0.0.2", 00:12:57.038 "trsvcid": "4420", 00:12:57.038 "trtype": "TCP" 00:12:57.038 }, 00:12:57.038 "peer_address": { 00:12:57.038 "adrfam": "IPv4", 00:12:57.038 "traddr": "10.0.0.1", 00:12:57.038 "trsvcid": "52410", 00:12:57.038 "trtype": "TCP" 00:12:57.038 }, 00:12:57.038 "qid": 0, 00:12:57.038 "state": "enabled", 00:12:57.038 "thread": "nvmf_tgt_poll_group_000" 00:12:57.038 } 00:12:57.038 ]' 00:12:57.038 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.038 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:57.038 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:57.038 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:57.038 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:57.295 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.295 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.295 22:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.554 22:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:12:58.120 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.120 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:12:58.120 22:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.120 22:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.120 22:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.120 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.120 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:58.120 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:58.686 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:12:58.686 22:07:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:58.686 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:58.686 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:58.686 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:58.686 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.686 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:12:58.686 22:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.686 22:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.686 22:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.686 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:58.686 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:58.943 00:12:58.943 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.943 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.943 22:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:59.201 22:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.201 22:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.201 22:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.201 22:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.201 22:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.201 22:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.201 { 00:12:59.201 "auth": { 00:12:59.201 "dhgroup": "ffdhe3072", 00:12:59.201 "digest": "sha256", 00:12:59.201 "state": "completed" 00:12:59.201 }, 00:12:59.201 "cntlid": 23, 00:12:59.201 "listen_address": { 00:12:59.201 "adrfam": "IPv4", 00:12:59.201 "traddr": "10.0.0.2", 00:12:59.201 "trsvcid": "4420", 00:12:59.201 "trtype": "TCP" 00:12:59.201 }, 00:12:59.201 "peer_address": { 00:12:59.201 "adrfam": "IPv4", 00:12:59.201 "traddr": "10.0.0.1", 00:12:59.201 "trsvcid": "52438", 00:12:59.201 "trtype": "TCP" 00:12:59.201 }, 00:12:59.201 "qid": 0, 00:12:59.201 "state": "enabled", 00:12:59.201 "thread": "nvmf_tgt_poll_group_000" 00:12:59.201 } 00:12:59.201 ]' 00:12:59.201 22:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.459 22:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.459 22:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:12:59.459 22:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:59.459 22:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.459 22:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.459 22:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.459 22:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.717 22:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:13:00.650 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.650 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:00.650 22:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.650 22:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.650 22:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.650 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:00.650 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.650 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:00.650 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:00.908 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:13:00.908 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.908 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:00.908 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:00.908 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:00.908 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.908 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.908 22:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.908 22:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.908 22:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.908 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.908 22:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.166 00:13:01.166 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.166 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.166 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.424 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.424 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.424 22:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.424 22:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.424 22:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.424 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.424 { 00:13:01.424 "auth": { 00:13:01.424 "dhgroup": "ffdhe4096", 00:13:01.424 "digest": "sha256", 00:13:01.424 "state": "completed" 00:13:01.424 }, 00:13:01.424 "cntlid": 25, 00:13:01.424 "listen_address": { 00:13:01.424 "adrfam": "IPv4", 00:13:01.424 "traddr": "10.0.0.2", 00:13:01.424 "trsvcid": "4420", 00:13:01.424 "trtype": "TCP" 00:13:01.424 }, 00:13:01.424 "peer_address": { 00:13:01.424 "adrfam": "IPv4", 00:13:01.424 "traddr": "10.0.0.1", 00:13:01.424 "trsvcid": "52466", 00:13:01.424 "trtype": "TCP" 00:13:01.424 }, 00:13:01.424 "qid": 0, 00:13:01.424 "state": "enabled", 00:13:01.424 "thread": "nvmf_tgt_poll_group_000" 00:13:01.424 } 00:13:01.424 ]' 00:13:01.424 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.681 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:01.681 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.681 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:01.681 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.681 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.681 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.681 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.938 22:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret 
DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:13:02.871 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.871 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:02.871 22:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.871 22:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.871 22:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.871 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.871 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:02.871 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:03.130 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:13:03.130 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.130 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:03.130 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:03.130 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:03.131 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.131 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.131 22:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.131 22:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.131 22:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.131 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.131 22:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.388 00:13:03.388 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.388 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.388 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
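The assertions at auth.sh@44 through @48, repeated after every attach in this log, are the actual pass/fail criterion: the controller must appear under its expected name on the host side, and the target's view of the newly created queue pair must report the digest and DH group configured for this pass with the authentication state "completed". Condensed into a standalone sketch using the same RPCs and jq filters; the expected values below are those of the current sha256/ffdhe4096 pass, where the script substitutes its loop variables:

# host side: exactly one controller, named nvme0
[[ $(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers \
      | jq -r '.[].name') == nvme0 ]]
# target side (script's rpc_cmd wrapper): inspect the qpair created by the attach
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]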
00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.972 { 00:13:03.972 "auth": { 00:13:03.972 "dhgroup": "ffdhe4096", 00:13:03.972 "digest": "sha256", 00:13:03.972 "state": "completed" 00:13:03.972 }, 00:13:03.972 "cntlid": 27, 00:13:03.972 "listen_address": { 00:13:03.972 "adrfam": "IPv4", 00:13:03.972 "traddr": "10.0.0.2", 00:13:03.972 "trsvcid": "4420", 00:13:03.972 "trtype": "TCP" 00:13:03.972 }, 00:13:03.972 "peer_address": { 00:13:03.972 "adrfam": "IPv4", 00:13:03.972 "traddr": "10.0.0.1", 00:13:03.972 "trsvcid": "52500", 00:13:03.972 "trtype": "TCP" 00:13:03.972 }, 00:13:03.972 "qid": 0, 00:13:03.972 "state": "enabled", 00:13:03.972 "thread": "nvmf_tgt_poll_group_000" 00:13:03.972 } 00:13:03.972 ]' 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.972 22:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.230 22:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:13:05.164 22:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.164 22:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:05.164 22:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.164 22:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.164 22:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.164 22:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:05.164 22:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:05.164 22:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:05.423 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:13:05.423 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:05.423 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:05.423 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:05.423 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:05.423 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.423 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.423 22:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.423 22:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.423 22:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.423 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.423 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.680 00:13:05.680 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.680 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.680 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.938 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.938 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.938 22:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.938 22:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.938 22:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.938 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.938 { 00:13:05.938 "auth": { 00:13:05.938 "dhgroup": "ffdhe4096", 00:13:05.938 "digest": "sha256", 00:13:05.938 "state": "completed" 00:13:05.938 }, 00:13:05.938 "cntlid": 29, 00:13:05.938 "listen_address": { 00:13:05.938 "adrfam": "IPv4", 00:13:05.938 "traddr": "10.0.0.2", 00:13:05.938 "trsvcid": "4420", 00:13:05.938 "trtype": "TCP" 00:13:05.938 }, 00:13:05.938 "peer_address": { 00:13:05.938 "adrfam": "IPv4", 00:13:05.938 "traddr": "10.0.0.1", 00:13:05.938 "trsvcid": "49052", 00:13:05.938 "trtype": "TCP" 00:13:05.938 }, 00:13:05.938 "qid": 0, 00:13:05.938 "state": "enabled", 00:13:05.938 "thread": 
"nvmf_tgt_poll_group_000" 00:13:05.938 } 00:13:05.938 ]' 00:13:06.195 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:06.195 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:06.195 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:06.195 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:06.195 22:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:06.195 22:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.195 22:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.195 22:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.452 22:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:13:07.383 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.383 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:07.383 22:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.383 22:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 22:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.384 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:07.384 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:07.384 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:07.647 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:13:07.647 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:07.647 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:07.647 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:07.647 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:07.647 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.647 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:13:07.647 22:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.647 22:07:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:07.647 22:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.647 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:07.647 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:07.906 00:13:07.906 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.906 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.906 22:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.165 22:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.165 22:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.165 22:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.165 22:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.165 22:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.165 22:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:08.165 { 00:13:08.165 "auth": { 00:13:08.165 "dhgroup": "ffdhe4096", 00:13:08.165 "digest": "sha256", 00:13:08.165 "state": "completed" 00:13:08.165 }, 00:13:08.165 "cntlid": 31, 00:13:08.165 "listen_address": { 00:13:08.165 "adrfam": "IPv4", 00:13:08.165 "traddr": "10.0.0.2", 00:13:08.165 "trsvcid": "4420", 00:13:08.165 "trtype": "TCP" 00:13:08.165 }, 00:13:08.165 "peer_address": { 00:13:08.165 "adrfam": "IPv4", 00:13:08.165 "traddr": "10.0.0.1", 00:13:08.165 "trsvcid": "49084", 00:13:08.165 "trtype": "TCP" 00:13:08.165 }, 00:13:08.165 "qid": 0, 00:13:08.165 "state": "enabled", 00:13:08.165 "thread": "nvmf_tgt_poll_group_000" 00:13:08.165 } 00:13:08.165 ]' 00:13:08.165 22:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.424 22:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:08.424 22:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.424 22:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:08.424 22:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.424 22:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.424 22:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.424 22:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.683 22:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid 
ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:13:09.614 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.614 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:09.614 22:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.614 22:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.614 22:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.614 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.614 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.614 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:09.614 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:09.872 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:13:09.872 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.872 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:09.872 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:09.872 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:09.872 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.872 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.872 22:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.872 22:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.872 22:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.872 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.872 22:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.130 00:13:10.388 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.388 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.388 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:10.647 { 00:13:10.647 "auth": { 00:13:10.647 "dhgroup": "ffdhe6144", 00:13:10.647 "digest": "sha256", 00:13:10.647 "state": "completed" 00:13:10.647 }, 00:13:10.647 "cntlid": 33, 00:13:10.647 "listen_address": { 00:13:10.647 "adrfam": "IPv4", 00:13:10.647 "traddr": "10.0.0.2", 00:13:10.647 "trsvcid": "4420", 00:13:10.647 "trtype": "TCP" 00:13:10.647 }, 00:13:10.647 "peer_address": { 00:13:10.647 "adrfam": "IPv4", 00:13:10.647 "traddr": "10.0.0.1", 00:13:10.647 "trsvcid": "49120", 00:13:10.647 "trtype": "TCP" 00:13:10.647 }, 00:13:10.647 "qid": 0, 00:13:10.647 "state": "enabled", 00:13:10.647 "thread": "nvmf_tgt_poll_group_000" 00:13:10.647 } 00:13:10.647 ]' 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.647 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.212 22:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:13:11.779 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.779 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:11.779 22:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.779 22:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.779 22:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.779 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:13:11.779 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:11.779 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:12.039 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:13:12.039 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.039 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:12.039 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:12.039 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:12.039 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.039 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.039 22:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.039 22:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.039 22:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.039 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.039 22:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.605 00:13:12.605 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.605 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.605 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.863 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.863 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.863 22:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.863 22:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.863 22:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.863 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.863 { 00:13:12.863 "auth": { 00:13:12.863 "dhgroup": "ffdhe6144", 00:13:12.863 "digest": "sha256", 00:13:12.863 "state": "completed" 00:13:12.863 }, 00:13:12.863 "cntlid": 35, 00:13:12.863 "listen_address": { 00:13:12.863 "adrfam": "IPv4", 00:13:12.863 "traddr": "10.0.0.2", 00:13:12.863 "trsvcid": "4420", 00:13:12.863 "trtype": "TCP" 00:13:12.863 }, 00:13:12.863 
"peer_address": { 00:13:12.863 "adrfam": "IPv4", 00:13:12.863 "traddr": "10.0.0.1", 00:13:12.863 "trsvcid": "49158", 00:13:12.863 "trtype": "TCP" 00:13:12.863 }, 00:13:12.863 "qid": 0, 00:13:12.863 "state": "enabled", 00:13:12.863 "thread": "nvmf_tgt_poll_group_000" 00:13:12.863 } 00:13:12.863 ]' 00:13:12.863 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.863 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:12.863 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.122 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:13.122 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.122 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.122 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.122 22:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.380 22:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:13:14.344 22:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.344 22:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:14.344 22:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.344 22:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.344 22:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.344 22:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.344 22:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:14.344 22:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:14.344 22:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:13:14.344 22:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.344 22:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:14.344 22:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:14.344 22:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:14.344 22:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.344 22:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.344 22:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.344 22:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.344 22:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.344 22:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.344 22:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.908 00:13:14.908 22:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.908 22:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.908 22:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.166 22:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.166 22:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.166 22:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.166 22:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.166 22:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.166 22:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.166 { 00:13:15.166 "auth": { 00:13:15.166 "dhgroup": "ffdhe6144", 00:13:15.166 "digest": "sha256", 00:13:15.166 "state": "completed" 00:13:15.166 }, 00:13:15.166 "cntlid": 37, 00:13:15.166 "listen_address": { 00:13:15.166 "adrfam": "IPv4", 00:13:15.166 "traddr": "10.0.0.2", 00:13:15.166 "trsvcid": "4420", 00:13:15.166 "trtype": "TCP" 00:13:15.166 }, 00:13:15.166 "peer_address": { 00:13:15.166 "adrfam": "IPv4", 00:13:15.166 "traddr": "10.0.0.1", 00:13:15.166 "trsvcid": "39374", 00:13:15.166 "trtype": "TCP" 00:13:15.166 }, 00:13:15.166 "qid": 0, 00:13:15.166 "state": "enabled", 00:13:15.166 "thread": "nvmf_tgt_poll_group_000" 00:13:15.166 } 00:13:15.166 ]' 00:13:15.166 22:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.423 22:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:15.423 22:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.423 22:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:15.423 22:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.423 22:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.423 22:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.423 22:08:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.680 22:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:13:16.636 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.636 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:16.636 22:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.636 22:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.636 22:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.636 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.636 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:16.636 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:16.972 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:13:16.972 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.972 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:16.972 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:16.972 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:16.972 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.972 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:13:16.972 22:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.972 22:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.972 22:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.972 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:16.972 22:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:17.229 00:13:17.229 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:13:17.229 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.229 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.486 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.486 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.486 22:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.486 22:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.486 22:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.486 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.486 { 00:13:17.486 "auth": { 00:13:17.486 "dhgroup": "ffdhe6144", 00:13:17.486 "digest": "sha256", 00:13:17.486 "state": "completed" 00:13:17.486 }, 00:13:17.486 "cntlid": 39, 00:13:17.486 "listen_address": { 00:13:17.486 "adrfam": "IPv4", 00:13:17.486 "traddr": "10.0.0.2", 00:13:17.486 "trsvcid": "4420", 00:13:17.486 "trtype": "TCP" 00:13:17.486 }, 00:13:17.486 "peer_address": { 00:13:17.486 "adrfam": "IPv4", 00:13:17.486 "traddr": "10.0.0.1", 00:13:17.486 "trsvcid": "39404", 00:13:17.486 "trtype": "TCP" 00:13:17.486 }, 00:13:17.486 "qid": 0, 00:13:17.486 "state": "enabled", 00:13:17.486 "thread": "nvmf_tgt_poll_group_000" 00:13:17.486 } 00:13:17.486 ]' 00:13:17.486 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.486 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.486 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.745 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:17.745 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.745 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.745 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.745 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.003 22:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
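Each pass of the loop above repeats the same provisioning pattern before the connect attempt: restrict the host's DH-CHAP digests and DH groups, register the host NQN on the subsystem with the key pair under test, attach a controller through the host bdev_nvme layer, then tear it down. A condensed sketch of that pattern, using key1/ckey1 and the ffdhe6144 group from the block just completed (target-side calls are shown without -s and assume the target app's default RPC socket; in the trace they go through the harness's rpc_cmd wrapper instead):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # host side: allow only the digest and DH group under test
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  # target side: bind key1 (and controller key ckey1) to the host NQN
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach (which triggers DH-CHAP authentication), then detach again
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The other key indices in the trace only change the --dhchap-key/--dhchap-ctrlr-key arguments (key3 is registered without a controller key), and the outer loop swaps the --dhchap-dhgroups value.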
00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.938 22:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:19.873 00:13:19.873 22:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.873 22:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.873 22:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.131 22:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.131 22:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.131 22:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.131 22:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.131 22:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.131 22:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:20.131 { 00:13:20.131 "auth": { 00:13:20.131 "dhgroup": "ffdhe8192", 00:13:20.131 "digest": "sha256", 00:13:20.131 "state": "completed" 00:13:20.131 }, 00:13:20.131 "cntlid": 41, 
00:13:20.131 "listen_address": { 00:13:20.131 "adrfam": "IPv4", 00:13:20.131 "traddr": "10.0.0.2", 00:13:20.131 "trsvcid": "4420", 00:13:20.131 "trtype": "TCP" 00:13:20.131 }, 00:13:20.131 "peer_address": { 00:13:20.131 "adrfam": "IPv4", 00:13:20.131 "traddr": "10.0.0.1", 00:13:20.131 "trsvcid": "39424", 00:13:20.131 "trtype": "TCP" 00:13:20.131 }, 00:13:20.131 "qid": 0, 00:13:20.131 "state": "enabled", 00:13:20.131 "thread": "nvmf_tgt_poll_group_000" 00:13:20.131 } 00:13:20.131 ]' 00:13:20.131 22:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:20.131 22:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:20.131 22:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:20.131 22:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:20.131 22:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:20.131 22:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.131 22:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.131 22:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.697 22:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:13:21.264 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.264 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:21.264 22:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.264 22:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.264 22:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.264 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:21.264 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:21.264 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:21.522 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:13:21.522 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.522 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:21.522 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:21.522 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:21.522 
22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.522 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.522 22:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.522 22:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.522 22:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.522 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:21.522 22:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.088 00:13:22.347 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:22.347 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.347 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:22.605 { 00:13:22.605 "auth": { 00:13:22.605 "dhgroup": "ffdhe8192", 00:13:22.605 "digest": "sha256", 00:13:22.605 "state": "completed" 00:13:22.605 }, 00:13:22.605 "cntlid": 43, 00:13:22.605 "listen_address": { 00:13:22.605 "adrfam": "IPv4", 00:13:22.605 "traddr": "10.0.0.2", 00:13:22.605 "trsvcid": "4420", 00:13:22.605 "trtype": "TCP" 00:13:22.605 }, 00:13:22.605 "peer_address": { 00:13:22.605 "adrfam": "IPv4", 00:13:22.605 "traddr": "10.0.0.1", 00:13:22.605 "trsvcid": "39464", 00:13:22.605 "trtype": "TCP" 00:13:22.605 }, 00:13:22.605 "qid": 0, 00:13:22.605 "state": "enabled", 00:13:22.605 "thread": "nvmf_tgt_poll_group_000" 00:13:22.605 } 00:13:22.605 ]' 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.605 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.172 22:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:13:23.739 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.739 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:23.739 22:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.739 22:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.739 22:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.739 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.739 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:23.739 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:23.997 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:13:23.997 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.997 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:23.997 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:23.997 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:23.997 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.997 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.997 22:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.997 22:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.997 22:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.997 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:23.997 22:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.574 00:13:24.574 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.574 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.574 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:25.142 { 00:13:25.142 "auth": { 00:13:25.142 "dhgroup": "ffdhe8192", 00:13:25.142 "digest": "sha256", 00:13:25.142 "state": "completed" 00:13:25.142 }, 00:13:25.142 "cntlid": 45, 00:13:25.142 "listen_address": { 00:13:25.142 "adrfam": "IPv4", 00:13:25.142 "traddr": "10.0.0.2", 00:13:25.142 "trsvcid": "4420", 00:13:25.142 "trtype": "TCP" 00:13:25.142 }, 00:13:25.142 "peer_address": { 00:13:25.142 "adrfam": "IPv4", 00:13:25.142 "traddr": "10.0.0.1", 00:13:25.142 "trsvcid": "38812", 00:13:25.142 "trtype": "TCP" 00:13:25.142 }, 00:13:25.142 "qid": 0, 00:13:25.142 "state": "enabled", 00:13:25.142 "thread": "nvmf_tgt_poll_group_000" 00:13:25.142 } 00:13:25.142 ]' 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.142 22:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.404 22:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:13:26.335 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.335 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:26.335 22:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.335 22:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.335 22:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.335 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:26.335 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:26.335 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:26.592 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:13:26.592 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:26.592 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:26.592 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:26.592 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:26.592 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.592 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:13:26.592 22:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.592 22:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.592 22:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.592 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:26.592 22:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.157 00:13:27.157 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:27.157 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:27.157 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:13:27.722 { 00:13:27.722 "auth": { 00:13:27.722 "dhgroup": "ffdhe8192", 00:13:27.722 "digest": "sha256", 00:13:27.722 "state": "completed" 00:13:27.722 }, 00:13:27.722 "cntlid": 47, 00:13:27.722 "listen_address": { 00:13:27.722 "adrfam": "IPv4", 00:13:27.722 "traddr": "10.0.0.2", 00:13:27.722 "trsvcid": "4420", 00:13:27.722 "trtype": "TCP" 00:13:27.722 }, 00:13:27.722 "peer_address": { 00:13:27.722 "adrfam": "IPv4", 00:13:27.722 "traddr": "10.0.0.1", 00:13:27.722 "trsvcid": "38836", 00:13:27.722 "trtype": "TCP" 00:13:27.722 }, 00:13:27.722 "qid": 0, 00:13:27.722 "state": "enabled", 00:13:27.722 "thread": "nvmf_tgt_poll_group_000" 00:13:27.722 } 00:13:27.722 ]' 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.722 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.980 22:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:13:28.914 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.914 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:28.914 22:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.914 22:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.914 22:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.914 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:28.914 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:28.914 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:28.914 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:28.914 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:28.914 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:13:28.914 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
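The nvme connect / nvme disconnect lines in the trace exercise the same key material through the kernel host stack rather than the SPDK host library; with nvme-cli the DH-CHAP material is passed directly as DHHC-1 secrets. A sketch assembled from the key0 invocation seen earlier in this run (secrets copied verbatim from the trace):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
      --hostid ff65e169-209e-4b79-b82d-da213c413a29 \
      --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: \
      --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

As in the trace, disconnect is expected to report one controller torn down for the subsystem NQN before the host entry is removed from the target.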
00:13:28.915 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:28.915 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:28.915 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:28.915 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.915 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.915 22:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.915 22:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.915 22:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.915 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.915 22:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.481 00:13:29.481 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:29.481 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.481 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:29.739 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.739 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.739 22:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.739 22:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.739 22:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.739 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:29.739 { 00:13:29.739 "auth": { 00:13:29.739 "dhgroup": "null", 00:13:29.739 "digest": "sha384", 00:13:29.739 "state": "completed" 00:13:29.739 }, 00:13:29.739 "cntlid": 49, 00:13:29.739 "listen_address": { 00:13:29.739 "adrfam": "IPv4", 00:13:29.739 "traddr": "10.0.0.2", 00:13:29.739 "trsvcid": "4420", 00:13:29.739 "trtype": "TCP" 00:13:29.739 }, 00:13:29.739 "peer_address": { 00:13:29.739 "adrfam": "IPv4", 00:13:29.739 "traddr": "10.0.0.1", 00:13:29.739 "trsvcid": "38870", 00:13:29.739 "trtype": "TCP" 00:13:29.739 }, 00:13:29.739 "qid": 0, 00:13:29.739 "state": "enabled", 00:13:29.739 "thread": "nvmf_tgt_poll_group_000" 00:13:29.739 } 00:13:29.739 ]' 00:13:29.739 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:29.739 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:29.739 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:29.739 22:08:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:29.739 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:29.997 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.997 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.997 22:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.255 22:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:13:30.820 22:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.820 22:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:30.820 22:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.820 22:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.820 22:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.820 22:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:30.820 22:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:30.820 22:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:31.384 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:13:31.384 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:31.384 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:31.384 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:31.384 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:31.384 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.384 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.384 22:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.384 22:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.384 22:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.384 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.384 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.643 00:13:31.643 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.643 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:31.643 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.900 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.900 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.900 22:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.900 22:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.900 22:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.900 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:31.900 { 00:13:31.900 "auth": { 00:13:31.900 "dhgroup": "null", 00:13:31.900 "digest": "sha384", 00:13:31.900 "state": "completed" 00:13:31.900 }, 00:13:31.900 "cntlid": 51, 00:13:31.900 "listen_address": { 00:13:31.900 "adrfam": "IPv4", 00:13:31.900 "traddr": "10.0.0.2", 00:13:31.900 "trsvcid": "4420", 00:13:31.900 "trtype": "TCP" 00:13:31.900 }, 00:13:31.900 "peer_address": { 00:13:31.900 "adrfam": "IPv4", 00:13:31.900 "traddr": "10.0.0.1", 00:13:31.900 "trsvcid": "38900", 00:13:31.900 "trtype": "TCP" 00:13:31.900 }, 00:13:31.900 "qid": 0, 00:13:31.900 "state": "enabled", 00:13:31.900 "thread": "nvmf_tgt_poll_group_000" 00:13:31.900 } 00:13:31.900 ]' 00:13:31.900 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:31.901 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:31.901 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.158 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:32.158 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.158 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.158 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.158 22:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.416 22:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:13:33.011 22:08:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.012 22:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:33.012 22:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.012 22:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.012 22:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.012 22:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:33.012 22:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:33.012 22:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:33.269 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:13:33.269 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.269 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:33.269 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:33.269 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:33.269 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.269 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.269 22:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.269 22:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.269 22:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.269 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.269 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.835 00:13:33.835 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:33.835 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.835 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.093 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.093 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.093 22:08:20 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.093 22:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.093 22:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.093 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.093 { 00:13:34.093 "auth": { 00:13:34.093 "dhgroup": "null", 00:13:34.093 "digest": "sha384", 00:13:34.093 "state": "completed" 00:13:34.093 }, 00:13:34.093 "cntlid": 53, 00:13:34.093 "listen_address": { 00:13:34.093 "adrfam": "IPv4", 00:13:34.093 "traddr": "10.0.0.2", 00:13:34.093 "trsvcid": "4420", 00:13:34.093 "trtype": "TCP" 00:13:34.093 }, 00:13:34.093 "peer_address": { 00:13:34.093 "adrfam": "IPv4", 00:13:34.093 "traddr": "10.0.0.1", 00:13:34.093 "trsvcid": "53646", 00:13:34.093 "trtype": "TCP" 00:13:34.093 }, 00:13:34.093 "qid": 0, 00:13:34.093 "state": "enabled", 00:13:34.093 "thread": "nvmf_tgt_poll_group_000" 00:13:34.093 } 00:13:34.093 ]' 00:13:34.093 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.093 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:34.093 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.093 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:34.093 22:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.093 22:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.093 22:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.093 22:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.658 22:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:13:35.223 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.223 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:35.223 22:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.223 22:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.223 22:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.223 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.223 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:35.223 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:35.491 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:13:35.491 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.491 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:35.491 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:35.491 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:35.491 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.491 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:13:35.491 22:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.491 22:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.491 22:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.491 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.491 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.055 00:13:36.055 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.055 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.055 22:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.315 22:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.315 22:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.315 22:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.315 22:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.315 22:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.315 22:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.315 { 00:13:36.315 "auth": { 00:13:36.315 "dhgroup": "null", 00:13:36.315 "digest": "sha384", 00:13:36.315 "state": "completed" 00:13:36.315 }, 00:13:36.315 "cntlid": 55, 00:13:36.315 "listen_address": { 00:13:36.315 "adrfam": "IPv4", 00:13:36.315 "traddr": "10.0.0.2", 00:13:36.315 "trsvcid": "4420", 00:13:36.315 "trtype": "TCP" 00:13:36.315 }, 00:13:36.315 "peer_address": { 00:13:36.315 "adrfam": "IPv4", 00:13:36.315 "traddr": "10.0.0.1", 00:13:36.315 "trsvcid": "53676", 00:13:36.315 "trtype": "TCP" 00:13:36.315 }, 00:13:36.315 "qid": 0, 00:13:36.315 "state": "enabled", 00:13:36.315 "thread": "nvmf_tgt_poll_group_000" 00:13:36.315 } 00:13:36.315 ]' 00:13:36.315 22:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.315 22:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:36.315 22:08:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:36.315 22:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:36.315 22:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:36.575 22:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.575 22:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.575 22:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.833 22:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.766 22:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.332 00:13:38.332 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:38.332 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:38.332 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.590 { 00:13:38.590 "auth": { 00:13:38.590 "dhgroup": "ffdhe2048", 00:13:38.590 "digest": "sha384", 00:13:38.590 "state": "completed" 00:13:38.590 }, 00:13:38.590 "cntlid": 57, 00:13:38.590 "listen_address": { 00:13:38.590 "adrfam": "IPv4", 00:13:38.590 "traddr": "10.0.0.2", 00:13:38.590 "trsvcid": "4420", 00:13:38.590 "trtype": "TCP" 00:13:38.590 }, 00:13:38.590 "peer_address": { 00:13:38.590 "adrfam": "IPv4", 00:13:38.590 "traddr": "10.0.0.1", 00:13:38.590 "trsvcid": "53706", 00:13:38.590 "trtype": "TCP" 00:13:38.590 }, 00:13:38.590 "qid": 0, 00:13:38.590 "state": "enabled", 00:13:38.590 "thread": "nvmf_tgt_poll_group_000" 00:13:38.590 } 00:13:38.590 ]' 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.590 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.154 22:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret 
DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:13:39.720 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.720 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:39.720 22:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.720 22:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.720 22:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.720 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:39.720 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:39.720 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:39.977 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:13:39.977 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:39.977 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:39.977 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:39.977 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:39.977 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.977 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.977 22:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.977 22:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.977 22:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.977 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.977 22:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.554 00:13:40.554 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:40.554 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:40.554 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.825 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
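The nvme connect / nvme disconnect pair traced just above (and repeated for every remaining key) exercises the Linux kernel initiator with the same credentials before the host is removed from the subsystem again. Isolated from the trace, that leg looks like the sketch below; the DHHC-1 secrets and host UUID are the values printed for this run, -i 1 caps the connection at a single I/O queue, and rpc_cmd is the same target-side rpc.py wrapper assumed in the earlier sketch.

  HOSTID=ff65e169-209e-4b79-b82d-da213c413a29
  # Connect via the kernel NVMe/TCP initiator: --dhchap-secret is the host key, --dhchap-ctrl-secret the controller key
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid "$HOSTID" \
      --dhchap-secret 'DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==:' \
      --dhchap-ctrl-secret 'DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=:'
  # Tear the association back down and de-authorize the host before the next iteration
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "nqn.2014-08.org.nvmexpress:uuid:$HOSTID"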
00:13:40.825 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.825 22:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.825 22:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.825 22:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.825 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:40.825 { 00:13:40.825 "auth": { 00:13:40.825 "dhgroup": "ffdhe2048", 00:13:40.825 "digest": "sha384", 00:13:40.825 "state": "completed" 00:13:40.825 }, 00:13:40.825 "cntlid": 59, 00:13:40.825 "listen_address": { 00:13:40.825 "adrfam": "IPv4", 00:13:40.825 "traddr": "10.0.0.2", 00:13:40.825 "trsvcid": "4420", 00:13:40.825 "trtype": "TCP" 00:13:40.825 }, 00:13:40.825 "peer_address": { 00:13:40.825 "adrfam": "IPv4", 00:13:40.825 "traddr": "10.0.0.1", 00:13:40.825 "trsvcid": "53722", 00:13:40.825 "trtype": "TCP" 00:13:40.825 }, 00:13:40.825 "qid": 0, 00:13:40.825 "state": "enabled", 00:13:40.825 "thread": "nvmf_tgt_poll_group_000" 00:13:40.825 } 00:13:40.825 ]' 00:13:40.825 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:40.825 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:40.825 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.083 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:41.083 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.083 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.083 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.083 22:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.649 22:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:13:42.214 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.214 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:42.214 22:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.214 22:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.214 22:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.214 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.214 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:42.214 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:42.778 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:13:42.778 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.778 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:42.778 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:42.778 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:42.778 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.778 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.778 22:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.778 22:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.778 22:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.778 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.778 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.036 00:13:43.036 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.036 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.036 22:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.601 { 00:13:43.601 "auth": { 00:13:43.601 "dhgroup": "ffdhe2048", 00:13:43.601 "digest": "sha384", 00:13:43.601 "state": "completed" 00:13:43.601 }, 00:13:43.601 "cntlid": 61, 00:13:43.601 "listen_address": { 00:13:43.601 "adrfam": "IPv4", 00:13:43.601 "traddr": "10.0.0.2", 00:13:43.601 "trsvcid": "4420", 00:13:43.601 "trtype": "TCP" 00:13:43.601 }, 00:13:43.601 "peer_address": { 00:13:43.601 "adrfam": "IPv4", 00:13:43.601 "traddr": "10.0.0.1", 00:13:43.601 "trsvcid": "53754", 00:13:43.601 "trtype": "TCP" 00:13:43.601 }, 00:13:43.601 "qid": 0, 00:13:43.601 "state": "enabled", 00:13:43.601 "thread": 
"nvmf_tgt_poll_group_000" 00:13:43.601 } 00:13:43.601 ]' 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.601 22:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.167 22:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:13:44.734 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.734 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:44.734 22:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.734 22:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.734 22:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.734 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.734 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:44.734 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:44.991 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:13:44.991 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.991 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:44.991 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:44.991 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:44.991 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.991 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:13:44.991 22:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.991 22:08:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.991 22:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.250 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:45.250 22:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:45.508 00:13:45.508 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.508 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.508 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.765 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.766 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.766 22:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.766 22:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.766 22:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.766 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.766 { 00:13:45.766 "auth": { 00:13:45.766 "dhgroup": "ffdhe2048", 00:13:45.766 "digest": "sha384", 00:13:45.766 "state": "completed" 00:13:45.766 }, 00:13:45.766 "cntlid": 63, 00:13:45.766 "listen_address": { 00:13:45.766 "adrfam": "IPv4", 00:13:45.766 "traddr": "10.0.0.2", 00:13:45.766 "trsvcid": "4420", 00:13:45.766 "trtype": "TCP" 00:13:45.766 }, 00:13:45.766 "peer_address": { 00:13:45.766 "adrfam": "IPv4", 00:13:45.766 "traddr": "10.0.0.1", 00:13:45.766 "trsvcid": "45314", 00:13:45.766 "trtype": "TCP" 00:13:45.766 }, 00:13:45.766 "qid": 0, 00:13:45.766 "state": "enabled", 00:13:45.766 "thread": "nvmf_tgt_poll_group_000" 00:13:45.766 } 00:13:45.766 ]' 00:13:45.766 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:46.023 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:46.023 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:46.023 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:46.023 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:46.023 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.023 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.023 22:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.280 22:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid 
ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:13:47.209 22:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.209 22:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:47.209 22:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.209 22:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.209 22:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.209 22:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:47.209 22:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:47.209 22:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:47.209 22:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:47.468 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:13:47.468 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:47.468 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:47.468 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:47.468 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:47.468 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.468 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.468 22:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.468 22:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.468 22:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.468 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.468 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.725 00:13:47.726 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:47.726 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:47.726 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.982 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.982 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.982 22:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.982 22:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.982 22:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.982 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.982 { 00:13:47.982 "auth": { 00:13:47.982 "dhgroup": "ffdhe3072", 00:13:47.982 "digest": "sha384", 00:13:47.982 "state": "completed" 00:13:47.982 }, 00:13:47.982 "cntlid": 65, 00:13:47.982 "listen_address": { 00:13:47.982 "adrfam": "IPv4", 00:13:47.982 "traddr": "10.0.0.2", 00:13:47.982 "trsvcid": "4420", 00:13:47.982 "trtype": "TCP" 00:13:47.982 }, 00:13:47.982 "peer_address": { 00:13:47.982 "adrfam": "IPv4", 00:13:47.982 "traddr": "10.0.0.1", 00:13:47.982 "trsvcid": "45356", 00:13:47.982 "trtype": "TCP" 00:13:47.982 }, 00:13:47.982 "qid": 0, 00:13:47.982 "state": "enabled", 00:13:47.982 "thread": "nvmf_tgt_poll_group_000" 00:13:47.982 } 00:13:47.982 ]' 00:13:47.982 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.982 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:47.982 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:48.238 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:48.238 22:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:48.238 22:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.238 22:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.238 22:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.494 22:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:13:49.060 22:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.060 22:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:49.060 22:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.060 22:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.060 22:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.060 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:13:49.060 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:49.060 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:49.624 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:13:49.624 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:49.624 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:49.624 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:49.624 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:49.624 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.624 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.625 22:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.625 22:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.625 22:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.625 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.625 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.882 00:13:49.882 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:49.882 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.882 22:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.140 22:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.140 22:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.140 22:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.140 22:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.140 22:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.140 22:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:50.140 { 00:13:50.140 "auth": { 00:13:50.140 "dhgroup": "ffdhe3072", 00:13:50.140 "digest": "sha384", 00:13:50.140 "state": "completed" 00:13:50.140 }, 00:13:50.140 "cntlid": 67, 00:13:50.140 "listen_address": { 00:13:50.140 "adrfam": "IPv4", 00:13:50.140 "traddr": "10.0.0.2", 00:13:50.140 "trsvcid": "4420", 00:13:50.140 "trtype": "TCP" 00:13:50.140 }, 00:13:50.140 
"peer_address": { 00:13:50.140 "adrfam": "IPv4", 00:13:50.140 "traddr": "10.0.0.1", 00:13:50.140 "trsvcid": "45404", 00:13:50.140 "trtype": "TCP" 00:13:50.140 }, 00:13:50.140 "qid": 0, 00:13:50.140 "state": "enabled", 00:13:50.140 "thread": "nvmf_tgt_poll_group_000" 00:13:50.140 } 00:13:50.140 ]' 00:13:50.140 22:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:50.398 22:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:50.398 22:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:50.398 22:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:50.398 22:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:50.398 22:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.398 22:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.398 22:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.964 22:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:13:51.529 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.529 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:51.529 22:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.529 22:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.529 22:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.529 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:51.529 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:51.529 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:51.788 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:13:51.788 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:51.788 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:51.788 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:51.788 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:51.788 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.788 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.788 22:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.788 22:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.788 22:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.788 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.788 22:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.354 00:13:52.354 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:52.354 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:52.354 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:52.612 { 00:13:52.612 "auth": { 00:13:52.612 "dhgroup": "ffdhe3072", 00:13:52.612 "digest": "sha384", 00:13:52.612 "state": "completed" 00:13:52.612 }, 00:13:52.612 "cntlid": 69, 00:13:52.612 "listen_address": { 00:13:52.612 "adrfam": "IPv4", 00:13:52.612 "traddr": "10.0.0.2", 00:13:52.612 "trsvcid": "4420", 00:13:52.612 "trtype": "TCP" 00:13:52.612 }, 00:13:52.612 "peer_address": { 00:13:52.612 "adrfam": "IPv4", 00:13:52.612 "traddr": "10.0.0.1", 00:13:52.612 "trsvcid": "45450", 00:13:52.612 "trtype": "TCP" 00:13:52.612 }, 00:13:52.612 "qid": 0, 00:13:52.612 "state": "enabled", 00:13:52.612 "thread": "nvmf_tgt_poll_group_000" 00:13:52.612 } 00:13:52.612 ]' 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.612 22:08:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.178 22:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:13:54.111 22:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.111 22:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:54.111 22:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.111 22:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.111 22:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.111 22:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.111 22:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:54.111 22:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:54.389 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:13:54.389 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.389 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:54.390 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:54.390 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:54.390 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.390 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:13:54.390 22:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.390 22:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.390 22:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.390 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:54.390 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:54.654 00:13:54.654 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:13:54.654 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.654 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.220 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.220 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.220 22:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.220 22:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.220 22:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.220 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.220 { 00:13:55.220 "auth": { 00:13:55.220 "dhgroup": "ffdhe3072", 00:13:55.220 "digest": "sha384", 00:13:55.220 "state": "completed" 00:13:55.220 }, 00:13:55.220 "cntlid": 71, 00:13:55.220 "listen_address": { 00:13:55.220 "adrfam": "IPv4", 00:13:55.220 "traddr": "10.0.0.2", 00:13:55.220 "trsvcid": "4420", 00:13:55.220 "trtype": "TCP" 00:13:55.220 }, 00:13:55.220 "peer_address": { 00:13:55.220 "adrfam": "IPv4", 00:13:55.220 "traddr": "10.0.0.1", 00:13:55.220 "trsvcid": "50164", 00:13:55.220 "trtype": "TCP" 00:13:55.220 }, 00:13:55.220 "qid": 0, 00:13:55.220 "state": "enabled", 00:13:55.220 "thread": "nvmf_tgt_poll_group_000" 00:13:55.220 } 00:13:55.220 ]' 00:13:55.220 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.220 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:55.220 22:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.220 22:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:55.220 22:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.220 22:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.220 22:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.220 22:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.477 22:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:13:56.419 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.419 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:56.419 22:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.419 22:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.419 22:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
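The sha384/ffdhe3072 pass above cycles through the key indexes with the same host/target RPC sequence each time. A minimal sketch of one iteration, reduced to the commands visible in the trace (the NQNs, UUID, address and key names are the ones this run uses; rpc_cmd is the harness RPC wrapper as used in this script, while host-side calls go through the rpc.py instance bound to /var/tmp/host.sock):

    # Constrain the host bdev layer to the digest/dhgroup under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Allow the host on the target subsystem with the DH-HMAC-CHAP key pair
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Attach from the host side, then check that the qpair authenticated
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'    # the trace expects "completed"

    # Tear down before the next key index
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0

The trace also checks .[0].auth.digest and .[0].auth.dhgroup against the values passed to bdev_nvme_set_options before detaching.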
00:13:56.419 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:56.419 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.419 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:56.419 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:56.680 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:13:56.680 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.680 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:56.680 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:56.680 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:56.680 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.680 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.680 22:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.680 22:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.680 22:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.680 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.680 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.938 00:13:56.938 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.938 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.938 22:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.503 { 00:13:57.503 "auth": { 00:13:57.503 "dhgroup": "ffdhe4096", 00:13:57.503 "digest": "sha384", 00:13:57.503 "state": "completed" 00:13:57.503 }, 00:13:57.503 "cntlid": 73, 
00:13:57.503 "listen_address": { 00:13:57.503 "adrfam": "IPv4", 00:13:57.503 "traddr": "10.0.0.2", 00:13:57.503 "trsvcid": "4420", 00:13:57.503 "trtype": "TCP" 00:13:57.503 }, 00:13:57.503 "peer_address": { 00:13:57.503 "adrfam": "IPv4", 00:13:57.503 "traddr": "10.0.0.1", 00:13:57.503 "trsvcid": "50184", 00:13:57.503 "trtype": "TCP" 00:13:57.503 }, 00:13:57.503 "qid": 0, 00:13:57.503 "state": "enabled", 00:13:57.503 "thread": "nvmf_tgt_poll_group_000" 00:13:57.503 } 00:13:57.503 ]' 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.503 22:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.760 22:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:13:58.694 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.694 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:13:58.694 22:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.694 22:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.694 22:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.694 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.694 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:58.694 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:58.954 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:13:58.954 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.954 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:58.954 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:58.954 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:58.954 
22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.954 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.954 22:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.954 22:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.954 22:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.954 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.954 22:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.530 00:13:59.530 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.530 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.530 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.788 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.788 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.788 22:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.788 22:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.788 22:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.788 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.788 { 00:13:59.788 "auth": { 00:13:59.788 "dhgroup": "ffdhe4096", 00:13:59.788 "digest": "sha384", 00:13:59.788 "state": "completed" 00:13:59.788 }, 00:13:59.788 "cntlid": 75, 00:13:59.788 "listen_address": { 00:13:59.788 "adrfam": "IPv4", 00:13:59.788 "traddr": "10.0.0.2", 00:13:59.788 "trsvcid": "4420", 00:13:59.788 "trtype": "TCP" 00:13:59.788 }, 00:13:59.788 "peer_address": { 00:13:59.788 "adrfam": "IPv4", 00:13:59.788 "traddr": "10.0.0.1", 00:13:59.788 "trsvcid": "50216", 00:13:59.788 "trtype": "TCP" 00:13:59.788 }, 00:13:59.788 "qid": 0, 00:13:59.788 "state": "enabled", 00:13:59.788 "thread": "nvmf_tgt_poll_group_000" 00:13:59.788 } 00:13:59.788 ]' 00:13:59.788 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.788 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.788 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.046 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:00.046 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.046 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.046 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.046 22:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.304 22:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:14:01.238 22:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.238 22:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:01.238 22:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.238 22:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.238 22:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.238 22:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.238 22:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:01.238 22:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:01.496 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:14:01.496 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.496 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:01.496 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:01.496 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:01.496 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.496 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.496 22:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.496 22:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.496 22:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.496 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.496 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.754 00:14:01.754 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.754 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.754 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.319 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.319 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.319 22:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.319 22:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.319 22:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.319 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.319 { 00:14:02.319 "auth": { 00:14:02.319 "dhgroup": "ffdhe4096", 00:14:02.319 "digest": "sha384", 00:14:02.319 "state": "completed" 00:14:02.319 }, 00:14:02.319 "cntlid": 77, 00:14:02.319 "listen_address": { 00:14:02.319 "adrfam": "IPv4", 00:14:02.319 "traddr": "10.0.0.2", 00:14:02.319 "trsvcid": "4420", 00:14:02.319 "trtype": "TCP" 00:14:02.319 }, 00:14:02.319 "peer_address": { 00:14:02.319 "adrfam": "IPv4", 00:14:02.319 "traddr": "10.0.0.1", 00:14:02.319 "trsvcid": "50250", 00:14:02.319 "trtype": "TCP" 00:14:02.319 }, 00:14:02.319 "qid": 0, 00:14:02.319 "state": "enabled", 00:14:02.319 "thread": "nvmf_tgt_poll_group_000" 00:14:02.319 } 00:14:02.319 ]' 00:14:02.319 22:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.319 22:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.319 22:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.319 22:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:02.319 22:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.319 22:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.319 22:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.319 22:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.577 22:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:14:03.510 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.510 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:03.510 22:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.510 22:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.510 22:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.510 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.510 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:03.510 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:03.768 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:14:03.768 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.768 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:03.768 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:03.768 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:03.768 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.768 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:14:03.768 22:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.768 22:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.768 22:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.768 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:03.768 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:04.026 00:14:04.284 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.284 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.284 22:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:14:04.543 { 00:14:04.543 "auth": { 00:14:04.543 "dhgroup": "ffdhe4096", 00:14:04.543 "digest": "sha384", 00:14:04.543 "state": "completed" 00:14:04.543 }, 00:14:04.543 "cntlid": 79, 00:14:04.543 "listen_address": { 00:14:04.543 "adrfam": "IPv4", 00:14:04.543 "traddr": "10.0.0.2", 00:14:04.543 "trsvcid": "4420", 00:14:04.543 "trtype": "TCP" 00:14:04.543 }, 00:14:04.543 "peer_address": { 00:14:04.543 "adrfam": "IPv4", 00:14:04.543 "traddr": "10.0.0.1", 00:14:04.543 "trsvcid": "47004", 00:14:04.543 "trtype": "TCP" 00:14:04.543 }, 00:14:04.543 "qid": 0, 00:14:04.543 "state": "enabled", 00:14:04.543 "thread": "nvmf_tgt_poll_group_000" 00:14:04.543 } 00:14:04.543 ]' 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.543 22:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.112 22:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:14:05.688 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.688 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:05.688 22:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.688 22:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.688 22:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.688 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:05.688 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.688 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:05.688 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:05.952 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:14:05.952 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.952 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:14:05.952 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:05.952 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:05.952 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.952 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.952 22:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.952 22:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.952 22:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.952 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.952 22:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.515 00:14:06.515 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.516 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:06.516 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.078 { 00:14:07.078 "auth": { 00:14:07.078 "dhgroup": "ffdhe6144", 00:14:07.078 "digest": "sha384", 00:14:07.078 "state": "completed" 00:14:07.078 }, 00:14:07.078 "cntlid": 81, 00:14:07.078 "listen_address": { 00:14:07.078 "adrfam": "IPv4", 00:14:07.078 "traddr": "10.0.0.2", 00:14:07.078 "trsvcid": "4420", 00:14:07.078 "trtype": "TCP" 00:14:07.078 }, 00:14:07.078 "peer_address": { 00:14:07.078 "adrfam": "IPv4", 00:14:07.078 "traddr": "10.0.0.1", 00:14:07.078 "trsvcid": "47042", 00:14:07.078 "trtype": "TCP" 00:14:07.078 }, 00:14:07.078 "qid": 0, 00:14:07.078 "state": "enabled", 00:14:07.078 "thread": "nvmf_tgt_poll_group_000" 00:14:07.078 } 00:14:07.078 ]' 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.078 22:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.335 22:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:14:08.265 22:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.265 22:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:08.265 22:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.265 22:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.265 22:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.265 22:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.265 22:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:08.265 22:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:08.542 22:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:14:08.542 22:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.542 22:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:08.542 22:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:08.542 22:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:08.542 22:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.542 22:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.542 22:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.542 22:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.542 22:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.542 22:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.542 22:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.800 00:14:08.800 22:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.800 22:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.800 22:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.363 { 00:14:09.363 "auth": { 00:14:09.363 "dhgroup": "ffdhe6144", 00:14:09.363 "digest": "sha384", 00:14:09.363 "state": "completed" 00:14:09.363 }, 00:14:09.363 "cntlid": 83, 00:14:09.363 "listen_address": { 00:14:09.363 "adrfam": "IPv4", 00:14:09.363 "traddr": "10.0.0.2", 00:14:09.363 "trsvcid": "4420", 00:14:09.363 "trtype": "TCP" 00:14:09.363 }, 00:14:09.363 "peer_address": { 00:14:09.363 "adrfam": "IPv4", 00:14:09.363 "traddr": "10.0.0.1", 00:14:09.363 "trsvcid": "47070", 00:14:09.363 "trtype": "TCP" 00:14:09.363 }, 00:14:09.363 "qid": 0, 00:14:09.363 "state": "enabled", 00:14:09.363 "thread": "nvmf_tgt_poll_group_000" 00:14:09.363 } 00:14:09.363 ]' 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.363 22:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.926 22:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:14:10.491 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:10.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.491 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:10.491 22:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.491 22:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.491 22:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.491 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.491 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:10.491 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:10.748 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:14:10.748 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.748 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:10.748 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:10.748 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:10.748 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.748 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.748 22:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.748 22:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.748 22:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.748 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.748 22:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.314 00:14:11.314 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.314 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.314 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.879 { 00:14:11.879 "auth": { 00:14:11.879 "dhgroup": "ffdhe6144", 00:14:11.879 "digest": "sha384", 00:14:11.879 "state": "completed" 00:14:11.879 }, 00:14:11.879 "cntlid": 85, 00:14:11.879 "listen_address": { 00:14:11.879 "adrfam": "IPv4", 00:14:11.879 "traddr": "10.0.0.2", 00:14:11.879 "trsvcid": "4420", 00:14:11.879 "trtype": "TCP" 00:14:11.879 }, 00:14:11.879 "peer_address": { 00:14:11.879 "adrfam": "IPv4", 00:14:11.879 "traddr": "10.0.0.1", 00:14:11.879 "trsvcid": "47100", 00:14:11.879 "trtype": "TCP" 00:14:11.879 }, 00:14:11.879 "qid": 0, 00:14:11.879 "state": "enabled", 00:14:11.879 "thread": "nvmf_tgt_poll_group_000" 00:14:11.879 } 00:14:11.879 ]' 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.879 22:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.137 22:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:14:13.071 22:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.071 22:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:13.071 22:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.071 22:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.071 22:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.071 22:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.071 22:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:13.071 22:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:13.329 22:09:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:14:13.329 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.329 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:13.329 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:13.329 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:13.329 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.329 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:14:13.329 22:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.329 22:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.329 22:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.329 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:13.329 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:13.896 00:14:13.896 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.896 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.896 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.161 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.161 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.161 22:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.161 22:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.161 22:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.161 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.161 { 00:14:14.161 "auth": { 00:14:14.161 "dhgroup": "ffdhe6144", 00:14:14.161 "digest": "sha384", 00:14:14.161 "state": "completed" 00:14:14.161 }, 00:14:14.161 "cntlid": 87, 00:14:14.161 "listen_address": { 00:14:14.161 "adrfam": "IPv4", 00:14:14.161 "traddr": "10.0.0.2", 00:14:14.161 "trsvcid": "4420", 00:14:14.162 "trtype": "TCP" 00:14:14.162 }, 00:14:14.162 "peer_address": { 00:14:14.162 "adrfam": "IPv4", 00:14:14.162 "traddr": "10.0.0.1", 00:14:14.162 "trsvcid": "57954", 00:14:14.162 "trtype": "TCP" 00:14:14.162 }, 00:14:14.162 "qid": 0, 00:14:14.162 "state": "enabled", 00:14:14.162 "thread": "nvmf_tgt_poll_group_000" 00:14:14.162 } 00:14:14.162 ]' 00:14:14.162 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.162 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:14:14.162 22:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.162 22:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:14.162 22:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.162 22:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.162 22:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.162 22:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.424 22:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:14:15.357 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.357 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:15.357 22:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.357 22:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.357 22:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.357 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:15.357 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.357 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:15.357 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:15.615 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:14:15.615 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.615 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:15.615 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:15.615 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:15.615 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.615 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.615 22:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.615 22:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.615 22:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.615 22:09:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.615 22:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.179 00:14:16.179 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.179 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.179 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.436 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.436 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.436 22:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.436 22:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.436 22:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.436 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.436 { 00:14:16.436 "auth": { 00:14:16.436 "dhgroup": "ffdhe8192", 00:14:16.436 "digest": "sha384", 00:14:16.436 "state": "completed" 00:14:16.436 }, 00:14:16.436 "cntlid": 89, 00:14:16.436 "listen_address": { 00:14:16.436 "adrfam": "IPv4", 00:14:16.436 "traddr": "10.0.0.2", 00:14:16.436 "trsvcid": "4420", 00:14:16.436 "trtype": "TCP" 00:14:16.436 }, 00:14:16.436 "peer_address": { 00:14:16.436 "adrfam": "IPv4", 00:14:16.436 "traddr": "10.0.0.1", 00:14:16.436 "trsvcid": "57970", 00:14:16.436 "trtype": "TCP" 00:14:16.436 }, 00:14:16.436 "qid": 0, 00:14:16.436 "state": "enabled", 00:14:16.436 "thread": "nvmf_tgt_poll_group_000" 00:14:16.436 } 00:14:16.436 ]' 00:14:16.436 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.436 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:16.436 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.694 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:16.694 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.694 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.694 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.694 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.951 22:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret 
DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.886 22:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.820 00:14:18.820 22:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.820 22:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.820 22:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
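The check that target/auth.sh runs after each attach, traced above and continued below, reduces to two RPC queries plus jq assertions. A minimal sketch, assuming the host RPC socket /var/tmp/host.sock seen in the trace and, for the target side, a plain rpc.py call standing in for the test's rpc_cmd wrapper (whose socket is not shown in this log):

    # Host side: the authenticated controller should be listed as nvme0.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'        # expect: nvme0
    # Target side: inspect the qpair negotiated for this connection.
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
    jq -r '.[0].auth.digest'  qpairs.json   # expect the digest passed to connect_authenticate (sha384 here)
    jq -r '.[0].auth.dhgroup' qpairs.json   # expect the dhgroup (ffdhe8192 here)
    jq -r '.[0].auth.state'   qpairs.json   # expect "completed" once DH-HMAC-CHAP has finished

Only after these assertions pass does the script detach the controller and exercise the same key over the kernel nvme-cli path (nvme connect / nvme disconnect), as the lines that follow show.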
00:14:19.079 22:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.079 22:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.079 22:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.079 22:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.079 22:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.079 22:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.079 { 00:14:19.079 "auth": { 00:14:19.079 "dhgroup": "ffdhe8192", 00:14:19.079 "digest": "sha384", 00:14:19.079 "state": "completed" 00:14:19.079 }, 00:14:19.079 "cntlid": 91, 00:14:19.079 "listen_address": { 00:14:19.079 "adrfam": "IPv4", 00:14:19.079 "traddr": "10.0.0.2", 00:14:19.079 "trsvcid": "4420", 00:14:19.079 "trtype": "TCP" 00:14:19.079 }, 00:14:19.079 "peer_address": { 00:14:19.079 "adrfam": "IPv4", 00:14:19.079 "traddr": "10.0.0.1", 00:14:19.079 "trsvcid": "58002", 00:14:19.079 "trtype": "TCP" 00:14:19.079 }, 00:14:19.079 "qid": 0, 00:14:19.079 "state": "enabled", 00:14:19.079 "thread": "nvmf_tgt_poll_group_000" 00:14:19.079 } 00:14:19.079 ]' 00:14:19.079 22:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.079 22:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.079 22:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.079 22:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:19.079 22:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.338 22:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.338 22:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.338 22:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.596 22:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.531 22:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.467 00:14:21.467 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.467 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.467 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.467 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.467 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.467 22:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.467 22:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.467 22:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.467 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.467 { 00:14:21.467 "auth": { 00:14:21.467 "dhgroup": "ffdhe8192", 00:14:21.467 "digest": "sha384", 00:14:21.467 "state": "completed" 00:14:21.467 }, 00:14:21.467 "cntlid": 93, 00:14:21.467 "listen_address": { 00:14:21.467 "adrfam": "IPv4", 00:14:21.467 "traddr": "10.0.0.2", 00:14:21.467 "trsvcid": "4420", 00:14:21.467 "trtype": "TCP" 00:14:21.467 }, 00:14:21.467 "peer_address": { 00:14:21.467 "adrfam": "IPv4", 00:14:21.467 "traddr": "10.0.0.1", 00:14:21.467 "trsvcid": "58028", 00:14:21.467 
"trtype": "TCP" 00:14:21.467 }, 00:14:21.467 "qid": 0, 00:14:21.467 "state": "enabled", 00:14:21.467 "thread": "nvmf_tgt_poll_group_000" 00:14:21.467 } 00:14:21.467 ]' 00:14:21.467 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.725 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:21.725 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.725 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:21.725 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.725 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.725 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.725 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.983 22:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:14:22.917 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.917 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:22.917 22:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.917 22:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.917 22:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.917 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:22.917 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:22.917 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:23.175 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:14:23.175 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.175 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:23.175 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:23.175 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:23.175 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.175 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:14:23.175 22:09:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.175 22:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.175 22:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.175 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:23.175 22:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.108 00:14:24.108 22:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.108 22:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.108 22:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.108 22:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.108 22:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.108 22:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.108 22:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.108 22:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.108 22:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.108 { 00:14:24.108 "auth": { 00:14:24.108 "dhgroup": "ffdhe8192", 00:14:24.108 "digest": "sha384", 00:14:24.108 "state": "completed" 00:14:24.108 }, 00:14:24.108 "cntlid": 95, 00:14:24.108 "listen_address": { 00:14:24.108 "adrfam": "IPv4", 00:14:24.108 "traddr": "10.0.0.2", 00:14:24.108 "trsvcid": "4420", 00:14:24.108 "trtype": "TCP" 00:14:24.108 }, 00:14:24.108 "peer_address": { 00:14:24.108 "adrfam": "IPv4", 00:14:24.108 "traddr": "10.0.0.1", 00:14:24.108 "trsvcid": "57462", 00:14:24.108 "trtype": "TCP" 00:14:24.108 }, 00:14:24.108 "qid": 0, 00:14:24.108 "state": "enabled", 00:14:24.108 "thread": "nvmf_tgt_poll_group_000" 00:14:24.108 } 00:14:24.108 ]' 00:14:24.108 22:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.108 22:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:24.108 22:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.366 22:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:24.366 22:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.366 22:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.366 22:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.366 22:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.623 22:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:14:25.190 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.190 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:25.190 22:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.190 22:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.190 22:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.190 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:25.190 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:25.190 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.190 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:25.190 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:25.494 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:14:25.494 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.494 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:25.494 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:25.494 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:25.494 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.494 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.494 22:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.494 22:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.494 22:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.494 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.494 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.057 00:14:26.057 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:14:26.058 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.058 22:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.058 22:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.058 22:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.058 22:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.058 22:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.316 22:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.316 22:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.316 { 00:14:26.316 "auth": { 00:14:26.316 "dhgroup": "null", 00:14:26.316 "digest": "sha512", 00:14:26.316 "state": "completed" 00:14:26.316 }, 00:14:26.316 "cntlid": 97, 00:14:26.316 "listen_address": { 00:14:26.316 "adrfam": "IPv4", 00:14:26.316 "traddr": "10.0.0.2", 00:14:26.316 "trsvcid": "4420", 00:14:26.316 "trtype": "TCP" 00:14:26.316 }, 00:14:26.316 "peer_address": { 00:14:26.316 "adrfam": "IPv4", 00:14:26.316 "traddr": "10.0.0.1", 00:14:26.316 "trsvcid": "57482", 00:14:26.316 "trtype": "TCP" 00:14:26.316 }, 00:14:26.316 "qid": 0, 00:14:26.316 "state": "enabled", 00:14:26.316 "thread": "nvmf_tgt_poll_group_000" 00:14:26.316 } 00:14:26.316 ]' 00:14:26.316 22:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.316 22:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:26.316 22:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.316 22:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:26.316 22:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.316 22:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.316 22:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.316 22:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.573 22:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:14:27.503 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.503 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:27.503 22:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.503 22:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.503 
22:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.503 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.503 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:27.503 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:27.763 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:14:27.764 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.764 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:27.764 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:27.764 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:27.764 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.764 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.764 22:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.764 22:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.764 22:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.764 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.764 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.027 00:14:28.027 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.027 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.027 22:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.610 { 00:14:28.610 "auth": { 00:14:28.610 "dhgroup": "null", 00:14:28.610 "digest": "sha512", 00:14:28.610 "state": "completed" 00:14:28.610 }, 00:14:28.610 "cntlid": 99, 00:14:28.610 "listen_address": { 
00:14:28.610 "adrfam": "IPv4", 00:14:28.610 "traddr": "10.0.0.2", 00:14:28.610 "trsvcid": "4420", 00:14:28.610 "trtype": "TCP" 00:14:28.610 }, 00:14:28.610 "peer_address": { 00:14:28.610 "adrfam": "IPv4", 00:14:28.610 "traddr": "10.0.0.1", 00:14:28.610 "trsvcid": "57516", 00:14:28.610 "trtype": "TCP" 00:14:28.610 }, 00:14:28.610 "qid": 0, 00:14:28.610 "state": "enabled", 00:14:28.610 "thread": "nvmf_tgt_poll_group_000" 00:14:28.610 } 00:14:28.610 ]' 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.610 22:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.202 22:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:14:29.789 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.789 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:29.789 22:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.789 22:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.789 22:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.789 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.789 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:29.789 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:30.047 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:14:30.047 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.047 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:30.047 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:30.047 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:30.047 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:14:30.048 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.048 22:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.048 22:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.048 22:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.048 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.048 22:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.305 00:14:30.305 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.305 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.306 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.563 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.563 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.563 22:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.564 22:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.564 22:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.564 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.564 { 00:14:30.564 "auth": { 00:14:30.564 "dhgroup": "null", 00:14:30.564 "digest": "sha512", 00:14:30.564 "state": "completed" 00:14:30.564 }, 00:14:30.564 "cntlid": 101, 00:14:30.564 "listen_address": { 00:14:30.564 "adrfam": "IPv4", 00:14:30.564 "traddr": "10.0.0.2", 00:14:30.564 "trsvcid": "4420", 00:14:30.564 "trtype": "TCP" 00:14:30.564 }, 00:14:30.564 "peer_address": { 00:14:30.564 "adrfam": "IPv4", 00:14:30.564 "traddr": "10.0.0.1", 00:14:30.564 "trsvcid": "57542", 00:14:30.564 "trtype": "TCP" 00:14:30.564 }, 00:14:30.564 "qid": 0, 00:14:30.564 "state": "enabled", 00:14:30.564 "thread": "nvmf_tgt_poll_group_000" 00:14:30.564 } 00:14:30.564 ]' 00:14:30.564 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.564 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:30.564 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.821 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:30.821 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.821 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.821 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:30.821 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.148 22:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:14:31.716 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.716 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:31.716 22:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.716 22:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.716 22:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.716 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.716 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:31.716 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:31.974 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:14:31.974 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.974 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:31.974 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:31.974 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:31.974 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.974 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:14:31.974 22:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.974 22:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.974 22:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.974 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:31.974 22:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.232 00:14:32.232 22:09:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.232 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.232 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.490 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.490 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.490 22:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.490 22:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.490 22:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.490 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.490 { 00:14:32.490 "auth": { 00:14:32.490 "dhgroup": "null", 00:14:32.490 "digest": "sha512", 00:14:32.490 "state": "completed" 00:14:32.490 }, 00:14:32.490 "cntlid": 103, 00:14:32.490 "listen_address": { 00:14:32.490 "adrfam": "IPv4", 00:14:32.490 "traddr": "10.0.0.2", 00:14:32.490 "trsvcid": "4420", 00:14:32.490 "trtype": "TCP" 00:14:32.490 }, 00:14:32.490 "peer_address": { 00:14:32.490 "adrfam": "IPv4", 00:14:32.490 "traddr": "10.0.0.1", 00:14:32.490 "trsvcid": "57564", 00:14:32.490 "trtype": "TCP" 00:14:32.490 }, 00:14:32.490 "qid": 0, 00:14:32.490 "state": "enabled", 00:14:32.490 "thread": "nvmf_tgt_poll_group_000" 00:14:32.490 } 00:14:32.490 ]' 00:14:32.490 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.749 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:32.749 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.749 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:32.749 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.749 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.749 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.749 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.007 22:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:14:33.939 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.939 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:33.939 22:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.939 22:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.939 22:09:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.939 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.939 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.939 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:33.939 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:34.197 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:14:34.197 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.197 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:34.197 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:34.197 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:34.197 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.197 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.197 22:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.197 22:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.197 22:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.197 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.197 22:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.456 00:14:34.714 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.714 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.714 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.973 { 00:14:34.973 "auth": { 00:14:34.973 "dhgroup": "ffdhe2048", 00:14:34.973 "digest": "sha512", 00:14:34.973 "state": 
"completed" 00:14:34.973 }, 00:14:34.973 "cntlid": 105, 00:14:34.973 "listen_address": { 00:14:34.973 "adrfam": "IPv4", 00:14:34.973 "traddr": "10.0.0.2", 00:14:34.973 "trsvcid": "4420", 00:14:34.973 "trtype": "TCP" 00:14:34.973 }, 00:14:34.973 "peer_address": { 00:14:34.973 "adrfam": "IPv4", 00:14:34.973 "traddr": "10.0.0.1", 00:14:34.973 "trsvcid": "33032", 00:14:34.973 "trtype": "TCP" 00:14:34.973 }, 00:14:34.973 "qid": 0, 00:14:34.973 "state": "enabled", 00:14:34.973 "thread": "nvmf_tgt_poll_group_000" 00:14:34.973 } 00:14:34.973 ]' 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.973 22:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.240 22:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:14:36.172 22:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.172 22:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:36.172 22:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.172 22:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.172 22:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.172 22:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.172 22:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:36.172 22:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:36.430 22:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:14:36.430 22:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.430 22:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:36.430 22:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:36.430 22:09:23 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # key=key1 00:14:36.430 22:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.430 22:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.430 22:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.430 22:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.430 22:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.430 22:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.430 22:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.995 00:14:36.995 22:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.995 22:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.995 22:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.253 22:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.253 22:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.253 22:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.253 22:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.253 22:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.253 22:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.253 { 00:14:37.253 "auth": { 00:14:37.253 "dhgroup": "ffdhe2048", 00:14:37.253 "digest": "sha512", 00:14:37.253 "state": "completed" 00:14:37.253 }, 00:14:37.253 "cntlid": 107, 00:14:37.253 "listen_address": { 00:14:37.253 "adrfam": "IPv4", 00:14:37.253 "traddr": "10.0.0.2", 00:14:37.253 "trsvcid": "4420", 00:14:37.253 "trtype": "TCP" 00:14:37.253 }, 00:14:37.253 "peer_address": { 00:14:37.253 "adrfam": "IPv4", 00:14:37.253 "traddr": "10.0.0.1", 00:14:37.253 "trsvcid": "33050", 00:14:37.253 "trtype": "TCP" 00:14:37.253 }, 00:14:37.253 "qid": 0, 00:14:37.253 "state": "enabled", 00:14:37.253 "thread": "nvmf_tgt_poll_group_000" 00:14:37.253 } 00:14:37.253 ]' 00:14:37.253 22:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.253 22:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:37.253 22:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.253 22:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:37.511 22:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.511 22:09:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.511 22:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.511 22:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.769 22:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:14:38.701 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.701 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:38.701 22:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.701 22:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.701 22:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.701 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.701 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:38.701 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:38.960 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:14:38.960 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.960 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:38.960 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:38.960 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:38.960 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.960 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.960 22:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.960 22:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.960 22:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.960 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.960 22:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.530 00:14:39.530 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.530 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.530 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.789 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.789 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.789 22:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.789 22:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.789 22:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.789 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.789 { 00:14:39.789 "auth": { 00:14:39.789 "dhgroup": "ffdhe2048", 00:14:39.789 "digest": "sha512", 00:14:39.789 "state": "completed" 00:14:39.789 }, 00:14:39.789 "cntlid": 109, 00:14:39.789 "listen_address": { 00:14:39.789 "adrfam": "IPv4", 00:14:39.789 "traddr": "10.0.0.2", 00:14:39.789 "trsvcid": "4420", 00:14:39.789 "trtype": "TCP" 00:14:39.789 }, 00:14:39.789 "peer_address": { 00:14:39.789 "adrfam": "IPv4", 00:14:39.789 "traddr": "10.0.0.1", 00:14:39.789 "trsvcid": "33068", 00:14:39.790 "trtype": "TCP" 00:14:39.790 }, 00:14:39.790 "qid": 0, 00:14:39.790 "state": "enabled", 00:14:39.790 "thread": "nvmf_tgt_poll_group_000" 00:14:39.790 } 00:14:39.790 ]' 00:14:39.790 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.790 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:39.790 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.790 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:39.790 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.048 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.048 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.048 22:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.308 22:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:14:40.874 22:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.874 22:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:40.874 22:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.874 22:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.874 22:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.874 22:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.874 22:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:40.874 22:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:41.133 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:14:41.133 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.133 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:41.133 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:41.133 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:41.133 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.133 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:14:41.133 22:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.133 22:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.133 22:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.133 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:41.133 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:41.700 00:14:41.700 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.700 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.700 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:14:41.958 { 00:14:41.958 "auth": { 00:14:41.958 "dhgroup": "ffdhe2048", 00:14:41.958 "digest": "sha512", 00:14:41.958 "state": "completed" 00:14:41.958 }, 00:14:41.958 "cntlid": 111, 00:14:41.958 "listen_address": { 00:14:41.958 "adrfam": "IPv4", 00:14:41.958 "traddr": "10.0.0.2", 00:14:41.958 "trsvcid": "4420", 00:14:41.958 "trtype": "TCP" 00:14:41.958 }, 00:14:41.958 "peer_address": { 00:14:41.958 "adrfam": "IPv4", 00:14:41.958 "traddr": "10.0.0.1", 00:14:41.958 "trsvcid": "33094", 00:14:41.958 "trtype": "TCP" 00:14:41.958 }, 00:14:41.958 "qid": 0, 00:14:41.958 "state": "enabled", 00:14:41.958 "thread": "nvmf_tgt_poll_group_000" 00:14:41.958 } 00:14:41.958 ]' 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.958 22:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.217 22:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:14:43.149 22:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.149 22:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:43.149 22:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.149 22:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.149 22:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.149 22:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:43.149 22:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.149 22:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:43.149 22:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:43.407 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:14:43.407 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.407 22:09:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:14:43.407 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:43.407 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:43.407 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.407 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.407 22:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.407 22:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.407 22:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.407 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.407 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.665 00:14:43.665 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.665 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.665 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.231 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.231 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.231 22:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.231 22:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.231 22:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.231 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.231 { 00:14:44.231 "auth": { 00:14:44.231 "dhgroup": "ffdhe3072", 00:14:44.231 "digest": "sha512", 00:14:44.231 "state": "completed" 00:14:44.231 }, 00:14:44.231 "cntlid": 113, 00:14:44.231 "listen_address": { 00:14:44.231 "adrfam": "IPv4", 00:14:44.231 "traddr": "10.0.0.2", 00:14:44.231 "trsvcid": "4420", 00:14:44.231 "trtype": "TCP" 00:14:44.231 }, 00:14:44.231 "peer_address": { 00:14:44.231 "adrfam": "IPv4", 00:14:44.231 "traddr": "10.0.0.1", 00:14:44.231 "trsvcid": "46900", 00:14:44.231 "trtype": "TCP" 00:14:44.231 }, 00:14:44.231 "qid": 0, 00:14:44.231 "state": "enabled", 00:14:44.231 "thread": "nvmf_tgt_poll_group_000" 00:14:44.231 } 00:14:44.231 ]' 00:14:44.231 22:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.231 22:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.231 22:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.231 22:09:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:44.231 22:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.231 22:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.231 22:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.231 22:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.797 22:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:14:45.387 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.388 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:45.388 22:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.388 22:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.388 22:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.388 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.388 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.388 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.646 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:14:45.646 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.646 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:45.646 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:45.646 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:45.646 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.646 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.646 22:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.646 22:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.646 22:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.646 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.646 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.903 00:14:45.903 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.903 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.903 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.160 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.160 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.160 22:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.160 22:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.160 22:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.160 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.160 { 00:14:46.160 "auth": { 00:14:46.160 "dhgroup": "ffdhe3072", 00:14:46.160 "digest": "sha512", 00:14:46.160 "state": "completed" 00:14:46.160 }, 00:14:46.160 "cntlid": 115, 00:14:46.160 "listen_address": { 00:14:46.160 "adrfam": "IPv4", 00:14:46.160 "traddr": "10.0.0.2", 00:14:46.160 "trsvcid": "4420", 00:14:46.160 "trtype": "TCP" 00:14:46.160 }, 00:14:46.160 "peer_address": { 00:14:46.160 "adrfam": "IPv4", 00:14:46.160 "traddr": "10.0.0.1", 00:14:46.160 "trsvcid": "46934", 00:14:46.160 "trtype": "TCP" 00:14:46.160 }, 00:14:46.160 "qid": 0, 00:14:46.160 "state": "enabled", 00:14:46.160 "thread": "nvmf_tgt_poll_group_000" 00:14:46.160 } 00:14:46.160 ]' 00:14:46.160 22:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.160 22:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.160 22:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.418 22:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:46.418 22:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.418 22:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.418 22:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.418 22:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.676 22:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:14:47.610 22:09:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.610 22:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.611 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.611 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.177 00:14:48.177 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.177 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.177 22:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.435 22:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.435 22:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:48.435 22:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.435 22:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.435 22:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.435 22:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.435 { 00:14:48.435 "auth": { 00:14:48.435 "dhgroup": "ffdhe3072", 00:14:48.435 "digest": "sha512", 00:14:48.435 "state": "completed" 00:14:48.435 }, 00:14:48.435 "cntlid": 117, 00:14:48.435 "listen_address": { 00:14:48.435 "adrfam": "IPv4", 00:14:48.435 "traddr": "10.0.0.2", 00:14:48.435 "trsvcid": "4420", 00:14:48.435 "trtype": "TCP" 00:14:48.435 }, 00:14:48.435 "peer_address": { 00:14:48.435 "adrfam": "IPv4", 00:14:48.435 "traddr": "10.0.0.1", 00:14:48.435 "trsvcid": "46956", 00:14:48.435 "trtype": "TCP" 00:14:48.435 }, 00:14:48.435 "qid": 0, 00:14:48.435 "state": "enabled", 00:14:48.435 "thread": "nvmf_tgt_poll_group_000" 00:14:48.435 } 00:14:48.435 ]' 00:14:48.435 22:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.435 22:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.435 22:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.435 22:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:48.435 22:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.692 22:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.692 22:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.692 22:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.692 22:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:14:49.627 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.627 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:49.627 22:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.628 22:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.628 22:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.628 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.628 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:49.628 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:49.886 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:14:49.886 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.886 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:49.886 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:49.886 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:49.886 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.886 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:14:49.886 22:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.886 22:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.886 22:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.886 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.886 22:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:50.144 00:14:50.144 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.144 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.144 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.711 { 00:14:50.711 "auth": { 00:14:50.711 "dhgroup": "ffdhe3072", 00:14:50.711 "digest": "sha512", 00:14:50.711 "state": "completed" 00:14:50.711 }, 00:14:50.711 "cntlid": 119, 00:14:50.711 "listen_address": { 00:14:50.711 "adrfam": "IPv4", 00:14:50.711 "traddr": "10.0.0.2", 00:14:50.711 "trsvcid": "4420", 00:14:50.711 "trtype": "TCP" 00:14:50.711 }, 00:14:50.711 "peer_address": { 00:14:50.711 "adrfam": "IPv4", 00:14:50.711 "traddr": "10.0.0.1", 00:14:50.711 "trsvcid": "46996", 00:14:50.711 "trtype": "TCP" 00:14:50.711 }, 00:14:50.711 "qid": 0, 00:14:50.711 "state": "enabled", 00:14:50.711 "thread": "nvmf_tgt_poll_group_000" 00:14:50.711 } 00:14:50.711 ]' 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.711 
22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.711 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.969 22:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:14:51.904 22:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.904 22:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:51.904 22:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.904 22:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.904 22:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.904 22:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.904 22:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.904 22:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:51.904 22:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:52.162 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:14:52.162 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.162 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:52.162 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:52.162 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:52.162 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.162 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.162 22:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.162 22:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.162 22:09:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.162 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.162 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.729 00:14:52.729 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.729 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.729 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.986 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.986 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.986 22:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.986 22:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.986 22:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.986 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.986 { 00:14:52.986 "auth": { 00:14:52.986 "dhgroup": "ffdhe4096", 00:14:52.986 "digest": "sha512", 00:14:52.986 "state": "completed" 00:14:52.986 }, 00:14:52.986 "cntlid": 121, 00:14:52.986 "listen_address": { 00:14:52.986 "adrfam": "IPv4", 00:14:52.986 "traddr": "10.0.0.2", 00:14:52.986 "trsvcid": "4420", 00:14:52.986 "trtype": "TCP" 00:14:52.986 }, 00:14:52.986 "peer_address": { 00:14:52.986 "adrfam": "IPv4", 00:14:52.986 "traddr": "10.0.0.1", 00:14:52.986 "trsvcid": "47022", 00:14:52.986 "trtype": "TCP" 00:14:52.986 }, 00:14:52.986 "qid": 0, 00:14:52.986 "state": "enabled", 00:14:52.986 "thread": "nvmf_tgt_poll_group_000" 00:14:52.986 } 00:14:52.986 ]' 00:14:52.986 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.986 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.987 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.987 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:52.987 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.244 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.244 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.244 22:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.502 22:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret 
DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:14:54.068 22:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.068 22:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:54.068 22:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.068 22:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.068 22:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.068 22:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.068 22:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:54.068 22:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:54.326 22:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:14:54.326 22:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.326 22:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:54.326 22:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:54.326 22:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:54.326 22:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.326 22:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.326 22:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.326 22:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.326 22:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.326 22:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.326 22:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.892 00:14:54.892 22:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.892 22:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.892 22:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
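For readers tracing the flow, every pass logged above has the same shape: auth.sh pins the host's DH-HMAC-CHAP negotiation to one digest/DH-group pair, allows the host NQN on the subsystem with a given key, attaches a controller with that key, checks the resulting qpair's auth fields with jq, and detaches again. Below is a condensed sketch of one such pass, reconstructed only from the RPC invocations visible in this excerpt; the key names key1/ckey1 are assumed to have been registered earlier in auth.sh (outside this excerpt), and target-side calls are shown against the default RPC socket, mirroring the log's rpc_cmd/hostrpc split.

# Condensed sketch of one connect_authenticate() pass, reconstructed from the RPCs in the log.
# Assumes key1/ckey1 were registered earlier in auth.sh (not shown in this excerpt).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29

# Host-side bdev_nvme: pin negotiation to one digest and one DH group per pass.
$rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target side (default RPC socket): allow the host with a key and, when present, a controller key.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller over TCP, authenticating with the same key pair.
$rpc -s $host_sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the attach and the auth parameters the target reports for the new qpair.
$rpc -s $host_sock bdev_nvme_get_controllers | jq -r '.[].name'   # expects: nvme0
qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
jq -r '.[0].auth.digest'  <<< "$qpairs"                           # expects: sha512
jq -r '.[0].auth.dhgroup' <<< "$qpairs"                           # expects: ffdhe2048
jq -r '.[0].auth.state'   <<< "$qpairs"                           # expects: completed

# Detach before the next digest/DH-group/key combination.
$rpc -s $host_sock bdev_nvme_detach_controller nvme0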
00:14:55.150 22:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.150 22:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.150 22:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.150 22:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.150 22:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.150 22:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.150 { 00:14:55.150 "auth": { 00:14:55.150 "dhgroup": "ffdhe4096", 00:14:55.150 "digest": "sha512", 00:14:55.150 "state": "completed" 00:14:55.150 }, 00:14:55.150 "cntlid": 123, 00:14:55.150 "listen_address": { 00:14:55.150 "adrfam": "IPv4", 00:14:55.150 "traddr": "10.0.0.2", 00:14:55.150 "trsvcid": "4420", 00:14:55.150 "trtype": "TCP" 00:14:55.151 }, 00:14:55.151 "peer_address": { 00:14:55.151 "adrfam": "IPv4", 00:14:55.151 "traddr": "10.0.0.1", 00:14:55.151 "trsvcid": "52868", 00:14:55.151 "trtype": "TCP" 00:14:55.151 }, 00:14:55.151 "qid": 0, 00:14:55.151 "state": "enabled", 00:14:55.151 "thread": "nvmf_tgt_poll_group_000" 00:14:55.151 } 00:14:55.151 ]' 00:14:55.151 22:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.408 22:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:55.408 22:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.408 22:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:55.408 22:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.408 22:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.408 22:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.409 22:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.666 22:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:14:56.619 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.619 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:56.619 22:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.619 22:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.619 22:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.619 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.619 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:14:56.619 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:56.876 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:14:56.876 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.876 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:56.876 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:56.876 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:56.876 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.876 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.876 22:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.876 22:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.876 22:09:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.876 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.876 22:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.441 00:14:57.441 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.441 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.441 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.699 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.700 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.700 22:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.700 22:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.700 22:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.700 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.700 { 00:14:57.700 "auth": { 00:14:57.700 "dhgroup": "ffdhe4096", 00:14:57.700 "digest": "sha512", 00:14:57.700 "state": "completed" 00:14:57.700 }, 00:14:57.700 "cntlid": 125, 00:14:57.700 "listen_address": { 00:14:57.700 "adrfam": "IPv4", 00:14:57.700 "traddr": "10.0.0.2", 00:14:57.700 "trsvcid": "4420", 00:14:57.700 "trtype": "TCP" 00:14:57.700 }, 00:14:57.700 "peer_address": { 00:14:57.700 "adrfam": "IPv4", 00:14:57.700 "traddr": "10.0.0.1", 00:14:57.700 "trsvcid": "52884", 00:14:57.700 
"trtype": "TCP" 00:14:57.700 }, 00:14:57.700 "qid": 0, 00:14:57.700 "state": "enabled", 00:14:57.700 "thread": "nvmf_tgt_poll_group_000" 00:14:57.700 } 00:14:57.700 ]' 00:14:57.700 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.700 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:57.700 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.700 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:57.700 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.957 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.957 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.957 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.213 22:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.147 22:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:14:59.147 22:09:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.147 22:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.147 22:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.147 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:59.147 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:59.406 00:14:59.664 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.664 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.664 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.921 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.921 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.921 22:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.921 22:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.921 22:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.921 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.921 { 00:14:59.921 "auth": { 00:14:59.921 "dhgroup": "ffdhe4096", 00:14:59.921 "digest": "sha512", 00:14:59.921 "state": "completed" 00:14:59.921 }, 00:14:59.921 "cntlid": 127, 00:14:59.921 "listen_address": { 00:14:59.921 "adrfam": "IPv4", 00:14:59.921 "traddr": "10.0.0.2", 00:14:59.921 "trsvcid": "4420", 00:14:59.921 "trtype": "TCP" 00:14:59.921 }, 00:14:59.921 "peer_address": { 00:14:59.921 "adrfam": "IPv4", 00:14:59.921 "traddr": "10.0.0.1", 00:14:59.922 "trsvcid": "52898", 00:14:59.922 "trtype": "TCP" 00:14:59.922 }, 00:14:59.922 "qid": 0, 00:14:59.922 "state": "enabled", 00:14:59.922 "thread": "nvmf_tgt_poll_group_000" 00:14:59.922 } 00:14:59.922 ]' 00:14:59.922 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.922 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:59.922 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.922 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:59.922 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.922 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.922 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.922 22:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.180 22:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:15:01.113 22:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.113 22:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:01.113 22:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.113 22:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.113 22:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.113 22:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.113 22:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.113 22:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:01.113 22:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:01.113 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:15:01.113 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.113 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:01.113 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:01.113 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:01.113 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.114 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.114 22:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.114 22:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.114 22:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.114 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.114 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.689 00:15:01.689 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.689 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.689 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.963 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.963 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.963 22:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.963 22:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.963 22:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.963 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.963 { 00:15:01.963 "auth": { 00:15:01.963 "dhgroup": "ffdhe6144", 00:15:01.963 "digest": "sha512", 00:15:01.963 "state": "completed" 00:15:01.963 }, 00:15:01.963 "cntlid": 129, 00:15:01.963 "listen_address": { 00:15:01.963 "adrfam": "IPv4", 00:15:01.963 "traddr": "10.0.0.2", 00:15:01.963 "trsvcid": "4420", 00:15:01.963 "trtype": "TCP" 00:15:01.963 }, 00:15:01.963 "peer_address": { 00:15:01.963 "adrfam": "IPv4", 00:15:01.963 "traddr": "10.0.0.1", 00:15:01.963 "trsvcid": "52930", 00:15:01.963 "trtype": "TCP" 00:15:01.963 }, 00:15:01.963 "qid": 0, 00:15:01.963 "state": "enabled", 00:15:01.963 "thread": "nvmf_tgt_poll_group_000" 00:15:01.963 } 00:15:01.963 ]' 00:15:01.963 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.221 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.221 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.221 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:02.221 22:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.221 22:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.221 22:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.221 22:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.480 22:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:15:03.046 22:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.046 22:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:03.046 22:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.046 22:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.304 22:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
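Each digest/dhgroup/key combination in this trace repeats the same sequence. A condensed sketch of that sequence, reconstructed only from the commands visible above (not the verbatim auth.sh source): rpc_cmd in the trace is the target-side RPC wrapper run inside the test's network namespace, shown here as a plain rpc.py call for readability; HOST_NQN stands for the nqn.2014-08.org.nvmexpress:uuid:ff65e169-... host NQN, and KEY0/CKEY0 are placeholders for the DHHC-1 secrets generated earlier in the test, outside this excerpt.

    # host side: restrict the initiator to one digest/dhgroup pair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # target side: allow the host NQN, bound to DH-HMAC-CHAP key key0 (ckey0 enables bidirectional auth)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach over TCP with the matching keys, then inspect the authenticated qpair
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOST_NQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # repeat the negotiation through the kernel initiator, passing the raw DHHC-1 secrets
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOST_NQN" \
        --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # tear down so the next key/dhgroup combination starts clean
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"
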
00:15:03.304 22:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.304 22:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:03.304 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:03.560 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:15:03.560 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.560 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:03.560 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:03.560 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:03.560 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.560 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.560 22:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.560 22:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.560 22:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.560 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.560 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.818 00:15:04.076 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.076 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.076 22:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.334 { 00:15:04.334 "auth": { 00:15:04.334 "dhgroup": "ffdhe6144", 00:15:04.334 "digest": "sha512", 00:15:04.334 "state": "completed" 00:15:04.334 }, 00:15:04.334 "cntlid": 131, 00:15:04.334 "listen_address": { 00:15:04.334 "adrfam": "IPv4", 00:15:04.334 "traddr": "10.0.0.2", 
00:15:04.334 "trsvcid": "4420", 00:15:04.334 "trtype": "TCP" 00:15:04.334 }, 00:15:04.334 "peer_address": { 00:15:04.334 "adrfam": "IPv4", 00:15:04.334 "traddr": "10.0.0.1", 00:15:04.334 "trsvcid": "38920", 00:15:04.334 "trtype": "TCP" 00:15:04.334 }, 00:15:04.334 "qid": 0, 00:15:04.334 "state": "enabled", 00:15:04.334 "thread": "nvmf_tgt_poll_group_000" 00:15:04.334 } 00:15:04.334 ]' 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.334 22:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.591 22:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:15:05.525 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.525 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:05.525 22:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.525 22:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.525 22:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.525 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.525 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:05.525 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:05.782 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:15:05.782 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.782 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:05.782 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:05.782 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:05.782 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.782 22:09:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.782 22:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.782 22:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.782 22:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.782 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.782 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.039 00:15:06.296 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.296 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.296 22:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.553 22:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.553 22:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.553 22:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.553 22:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.553 22:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.553 22:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.553 { 00:15:06.553 "auth": { 00:15:06.553 "dhgroup": "ffdhe6144", 00:15:06.553 "digest": "sha512", 00:15:06.553 "state": "completed" 00:15:06.553 }, 00:15:06.553 "cntlid": 133, 00:15:06.553 "listen_address": { 00:15:06.553 "adrfam": "IPv4", 00:15:06.553 "traddr": "10.0.0.2", 00:15:06.553 "trsvcid": "4420", 00:15:06.553 "trtype": "TCP" 00:15:06.553 }, 00:15:06.553 "peer_address": { 00:15:06.553 "adrfam": "IPv4", 00:15:06.553 "traddr": "10.0.0.1", 00:15:06.553 "trsvcid": "38946", 00:15:06.553 "trtype": "TCP" 00:15:06.553 }, 00:15:06.553 "qid": 0, 00:15:06.553 "state": "enabled", 00:15:06.553 "thread": "nvmf_tgt_poll_group_000" 00:15:06.553 } 00:15:06.553 ]' 00:15:06.553 22:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.553 22:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:06.553 22:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.553 22:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:06.553 22:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.811 22:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.811 22:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:06.812 22:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.070 22:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.001 22:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.566 
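The key3 pass just above differs from the earlier iterations: ckeys[3] is empty in this run, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion drops the controller-key argument and only one-way (host-to-target) DH-HMAC-CHAP is exercised; nvmf_subsystem_add_host and bdev_nvme_attach_controller receive --dhchap-key key3 with no --dhchap-ctrlr-key, and the matching nvme connect carries only --dhchap-secret. A minimal bash illustration of that idiom, with placeholder key names (the real keys are created earlier in auth.sh, outside this excerpt):

    # mirrors the ${ckeys[$3]:+...} expansion used by connect_authenticate above
    ckeys=("ckey0" "ckey1" "ckey2" "")   # no controller key for key3 -> unidirectional auth
    for keyid in "${!ckeys[@]}"; do
        ckey_arg=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid -> ${ckey_arg[*]:-no --dhchap-ctrlr-key (unidirectional)}"
    done
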
00:15:08.567 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.567 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.567 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.824 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.824 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.824 22:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.824 22:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.824 22:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.824 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.824 { 00:15:08.824 "auth": { 00:15:08.824 "dhgroup": "ffdhe6144", 00:15:08.824 "digest": "sha512", 00:15:08.824 "state": "completed" 00:15:08.824 }, 00:15:08.824 "cntlid": 135, 00:15:08.824 "listen_address": { 00:15:08.824 "adrfam": "IPv4", 00:15:08.824 "traddr": "10.0.0.2", 00:15:08.824 "trsvcid": "4420", 00:15:08.824 "trtype": "TCP" 00:15:08.824 }, 00:15:08.824 "peer_address": { 00:15:08.824 "adrfam": "IPv4", 00:15:08.824 "traddr": "10.0.0.1", 00:15:08.824 "trsvcid": "38960", 00:15:08.824 "trtype": "TCP" 00:15:08.824 }, 00:15:08.824 "qid": 0, 00:15:08.824 "state": "enabled", 00:15:08.824 "thread": "nvmf_tgt_poll_group_000" 00:15:08.824 } 00:15:08.824 ]' 00:15:08.824 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.824 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:08.824 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.082 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:09.082 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.082 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.082 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.082 22:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.376 22:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:15:09.941 22:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.941 22:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:09.941 22:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.941 22:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.941 22:09:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.941 22:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:09.941 22:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.941 22:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:09.941 22:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.200 22:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:15:10.200 22:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.200 22:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:10.200 22:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:10.200 22:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:10.200 22:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.200 22:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.200 22:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.200 22:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.200 22:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.200 22:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.200 22:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.135 00:15:11.135 22:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.135 22:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.135 22:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.394 { 00:15:11.394 "auth": { 00:15:11.394 "dhgroup": "ffdhe8192", 00:15:11.394 "digest": "sha512", 
00:15:11.394 "state": "completed" 00:15:11.394 }, 00:15:11.394 "cntlid": 137, 00:15:11.394 "listen_address": { 00:15:11.394 "adrfam": "IPv4", 00:15:11.394 "traddr": "10.0.0.2", 00:15:11.394 "trsvcid": "4420", 00:15:11.394 "trtype": "TCP" 00:15:11.394 }, 00:15:11.394 "peer_address": { 00:15:11.394 "adrfam": "IPv4", 00:15:11.394 "traddr": "10.0.0.1", 00:15:11.394 "trsvcid": "38986", 00:15:11.394 "trtype": "TCP" 00:15:11.394 }, 00:15:11.394 "qid": 0, 00:15:11.394 "state": "enabled", 00:15:11.394 "thread": "nvmf_tgt_poll_group_000" 00:15:11.394 } 00:15:11.394 ]' 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.394 22:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.961 22:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:15:12.527 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.527 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:12.527 22:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.528 22:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.528 22:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.528 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.528 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:12.528 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:12.786 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:15:12.786 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.786 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:12.786 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:12.786 22:09:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:12.786 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.786 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.786 22:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.786 22:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.786 22:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.786 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.786 22:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.720 00:15:13.720 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.720 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.720 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.976 { 00:15:13.976 "auth": { 00:15:13.976 "dhgroup": "ffdhe8192", 00:15:13.976 "digest": "sha512", 00:15:13.976 "state": "completed" 00:15:13.976 }, 00:15:13.976 "cntlid": 139, 00:15:13.976 "listen_address": { 00:15:13.976 "adrfam": "IPv4", 00:15:13.976 "traddr": "10.0.0.2", 00:15:13.976 "trsvcid": "4420", 00:15:13.976 "trtype": "TCP" 00:15:13.976 }, 00:15:13.976 "peer_address": { 00:15:13.976 "adrfam": "IPv4", 00:15:13.976 "traddr": "10.0.0.1", 00:15:13.976 "trsvcid": "39014", 00:15:13.976 "trtype": "TCP" 00:15:13.976 }, 00:15:13.976 "qid": 0, 00:15:13.976 "state": "enabled", 00:15:13.976 "thread": "nvmf_tgt_poll_group_000" 00:15:13.976 } 00:15:13.976 ]' 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
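The [[ ... ]] comparisons surrounding each qpair dump are plain jq checks over the nvmf_subsystem_get_qpairs output. Condensed, and assuming the target's default RPC socket rather than the namespace wrapper that rpc_cmd uses in this run, the verification amounts to:

    # confirm the negotiated auth parameters on the first qpair (same jq paths as the trace above)
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
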
00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.976 22:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.233 22:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:01:NzEzZWNiOGQ4MTIwMmUyYmRmMWE0YzBlMzFkYmUwMjlUzfux: --dhchap-ctrl-secret DHHC-1:02:NGM2MThjYzZlZmUxNWVhYTk5YjAyNDY5M2RhMDM3OTllY2YxMTZiMTliOWJjMjdmgAKDwg==: 00:15:15.163 22:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.163 22:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:15.163 22:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.163 22:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.163 22:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.163 22:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.163 22:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.163 22:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.437 22:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:15:15.437 22:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.437 22:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:15.437 22:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:15.437 22:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:15.437 22:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.437 22:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.437 22:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.437 22:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.437 22:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.437 22:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.437 22:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.002 00:15:16.002 22:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.002 22:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.002 22:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.260 22:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.260 22:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.260 22:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.260 22:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.260 22:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.260 22:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.260 { 00:15:16.260 "auth": { 00:15:16.260 "dhgroup": "ffdhe8192", 00:15:16.260 "digest": "sha512", 00:15:16.260 "state": "completed" 00:15:16.260 }, 00:15:16.260 "cntlid": 141, 00:15:16.260 "listen_address": { 00:15:16.260 "adrfam": "IPv4", 00:15:16.260 "traddr": "10.0.0.2", 00:15:16.260 "trsvcid": "4420", 00:15:16.260 "trtype": "TCP" 00:15:16.260 }, 00:15:16.260 "peer_address": { 00:15:16.260 "adrfam": "IPv4", 00:15:16.260 "traddr": "10.0.0.1", 00:15:16.260 "trsvcid": "55662", 00:15:16.260 "trtype": "TCP" 00:15:16.260 }, 00:15:16.260 "qid": 0, 00:15:16.260 "state": "enabled", 00:15:16.260 "thread": "nvmf_tgt_poll_group_000" 00:15:16.260 } 00:15:16.260 ]' 00:15:16.260 22:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.260 22:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.260 22:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.516 22:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:16.516 22:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.516 22:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.516 22:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.516 22:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.772 22:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:02:ZTBmZmExZmJmYjIzZjA0YzU0NDk1ODNhZDM3MGE1NjU5MDI4ZmJhMDg3MDIwMGJl4QARmQ==: --dhchap-ctrl-secret DHHC-1:01:MTg5MDEwNGY1OTk2YTcwZDRjNDA5ZjJjMGUxNmI5ZTNKyxZ8: 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:17.705 22:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:18.638 00:15:18.638 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.638 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.638 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.897 { 00:15:18.897 "auth": { 00:15:18.897 "dhgroup": "ffdhe8192", 00:15:18.897 "digest": "sha512", 00:15:18.897 "state": "completed" 00:15:18.897 }, 00:15:18.897 "cntlid": 143, 00:15:18.897 "listen_address": { 00:15:18.897 "adrfam": "IPv4", 00:15:18.897 "traddr": "10.0.0.2", 00:15:18.897 "trsvcid": "4420", 00:15:18.897 "trtype": "TCP" 00:15:18.897 }, 00:15:18.897 "peer_address": { 00:15:18.897 "adrfam": "IPv4", 00:15:18.897 "traddr": "10.0.0.1", 00:15:18.897 "trsvcid": "55684", 00:15:18.897 "trtype": "TCP" 00:15:18.897 }, 00:15:18.897 "qid": 0, 00:15:18.897 "state": "enabled", 00:15:18.897 "thread": "nvmf_tgt_poll_group_000" 00:15:18.897 } 00:15:18.897 ]' 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.897 22:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.155 22:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:15:20.087 22:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.087 22:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:20.087 22:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.087 22:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.087 22:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.087 22:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:20.087 22:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:15:20.087 22:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:20.087 22:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:20.087 22:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:20.087 22:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:20.346 22:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:15:20.346 22:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.346 22:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:20.346 22:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:20.346 22:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:20.346 22:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.346 22:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.346 22:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.346 22:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.346 22:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.346 22:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.346 22:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.915 00:15:20.916 22:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.916 22:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.916 22:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.174 22:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.174 22:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.174 22:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.174 22:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.174 22:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.174 22:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.174 { 00:15:21.174 "auth": { 00:15:21.174 "dhgroup": "ffdhe8192", 00:15:21.174 "digest": "sha512", 00:15:21.174 "state": "completed" 00:15:21.174 }, 00:15:21.174 "cntlid": 145, 00:15:21.174 "listen_address": { 00:15:21.174 "adrfam": "IPv4", 00:15:21.174 "traddr": "10.0.0.2", 00:15:21.174 "trsvcid": "4420", 00:15:21.174 "trtype": "TCP" 00:15:21.174 }, 00:15:21.174 "peer_address": { 00:15:21.174 "adrfam": "IPv4", 00:15:21.174 "traddr": "10.0.0.1", 00:15:21.174 "trsvcid": "55702", 00:15:21.174 "trtype": "TCP" 00:15:21.174 }, 00:15:21.174 "qid": 0, 00:15:21.174 "state": "enabled", 00:15:21.174 "thread": "nvmf_tgt_poll_group_000" 00:15:21.174 } 
00:15:21.174 ]' 00:15:21.174 22:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.432 22:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:21.432 22:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.432 22:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:21.432 22:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.432 22:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.432 22:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.432 22:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.690 22:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:00:ZDIyOWFkZTk1NjQwMjUyN2NlN2I0YTQ0NTc0NGQ3MjdkZTA3NTEwMDMyZmJlOGE0ilNnnQ==: --dhchap-ctrl-secret DHHC-1:03:Y2Q5YzYyZjVhZDRiZjdhZmUzYmFlZmExY2QzNDg2MGM4NzRjNzk5Y2Q5MDgxZDcwNmQyMDgyZDZmM2I1ODJmYmMKI9Q=: 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:22.625 22:10:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:22.625 22:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:23.191 2024/07/15 22:10:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:23.191 request: 00:15:23.191 { 00:15:23.191 "method": "bdev_nvme_attach_controller", 00:15:23.191 "params": { 00:15:23.191 "name": "nvme0", 00:15:23.191 "trtype": "tcp", 00:15:23.191 "traddr": "10.0.0.2", 00:15:23.191 "adrfam": "ipv4", 00:15:23.191 "trsvcid": "4420", 00:15:23.191 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:23.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29", 00:15:23.191 "prchk_reftag": false, 00:15:23.191 "prchk_guard": false, 00:15:23.191 "hdgst": false, 00:15:23.191 "ddgst": false, 00:15:23.191 "dhchap_key": "key2" 00:15:23.191 } 00:15:23.191 } 00:15:23.191 Got JSON-RPC error response 00:15:23.191 GoRPCClient: error on JSON-RPC call 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:23.191 22:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:23.755 2024/07/15 22:10:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:23.755 request: 00:15:23.755 { 00:15:23.755 "method": "bdev_nvme_attach_controller", 00:15:23.755 "params": { 00:15:23.755 "name": "nvme0", 00:15:23.755 "trtype": "tcp", 00:15:23.755 "traddr": "10.0.0.2", 00:15:23.755 "adrfam": "ipv4", 00:15:23.755 "trsvcid": "4420", 00:15:23.755 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:23.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29", 00:15:23.755 "prchk_reftag": false, 00:15:23.755 "prchk_guard": false, 00:15:23.755 "hdgst": false, 00:15:23.755 "ddgst": false, 00:15:23.755 "dhchap_key": "key1", 00:15:23.755 "dhchap_ctrlr_key": "ckey2" 00:15:23.755 } 00:15:23.755 } 00:15:23.755 Got JSON-RPC error response 00:15:23.755 GoRPCClient: error on JSON-RPC call 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key1 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.755 22:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.374 2024/07/15 22:10:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:24.374 request: 00:15:24.374 { 00:15:24.374 "method": "bdev_nvme_attach_controller", 00:15:24.374 "params": { 00:15:24.374 "name": "nvme0", 00:15:24.374 "trtype": "tcp", 00:15:24.374 "traddr": "10.0.0.2", 00:15:24.374 "adrfam": "ipv4", 00:15:24.374 "trsvcid": "4420", 00:15:24.374 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:15:24.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29", 00:15:24.374 "prchk_reftag": false, 00:15:24.374 "prchk_guard": false, 00:15:24.374 "hdgst": false, 00:15:24.374 "ddgst": false, 00:15:24.374 "dhchap_key": "key1", 00:15:24.374 "dhchap_ctrlr_key": "ckey1" 00:15:24.374 } 00:15:24.374 } 00:15:24.374 Got JSON-RPC error response 00:15:24.374 GoRPCClient: error on JSON-RPC call 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77867 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77867 ']' 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77867 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77867 00:15:24.374 killing process with pid 77867 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77867' 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77867 00:15:24.374 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77867 00:15:24.631 22:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:24.631 22:10:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:24.631 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:24.631 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.631 22:10:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82896 00:15:24.631 22:10:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:24.631 22:10:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82896 00:15:24.631 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82896 ']' 00:15:24.631 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.631 22:10:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.631 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.631 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.631 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82896 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82896 ']' 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
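Note: the target has just been restarted with --wait-for-rpc -L nvmf_auth, so the script has to poll the RPC socket before configuring anything. A minimal sketch of that wait, assuming rpc.py's generic rpc_get_methods call as a liveness probe and framework_start_init to leave the pre-init state (socket path and retry count are taken from the trace; the body of the harness's waitforlisten and of the bare rpc_cmd that follows is not shown in the log, so this is not a copy of it):

# Sketch only: poll a freshly started nvmf_tgt until its RPC socket answers,
# then release it from --wait-for-rpc mode.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods is available even before subsystem init, so it doubles
    # as a "is the app listening yet" check
    if "$rpc" -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done
# With --wait-for-rpc the application stays in a pre-init state until told to
# continue; framework_start_init is the usual way to do that (assumption here).
"$rpc" -s "$sock" framework_start_init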
00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.889 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.146 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.146 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:25.146 22:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:15:25.146 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.146 22:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.146 22:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.146 22:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:15:25.146 22:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.146 22:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:25.146 22:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:25.146 22:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:25.146 22:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.146 22:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:15:25.146 22:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.146 22:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.403 22:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.403 22:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.403 22:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.969 00:15:25.969 22:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.969 22:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.969 22:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.226 22:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.226 22:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.226 22:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.226 22:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.226 22:10:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.226 22:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.226 { 00:15:26.226 "auth": { 00:15:26.226 "dhgroup": 
"ffdhe8192", 00:15:26.226 "digest": "sha512", 00:15:26.227 "state": "completed" 00:15:26.227 }, 00:15:26.227 "cntlid": 1, 00:15:26.227 "listen_address": { 00:15:26.227 "adrfam": "IPv4", 00:15:26.227 "traddr": "10.0.0.2", 00:15:26.227 "trsvcid": "4420", 00:15:26.227 "trtype": "TCP" 00:15:26.227 }, 00:15:26.227 "peer_address": { 00:15:26.227 "adrfam": "IPv4", 00:15:26.227 "traddr": "10.0.0.1", 00:15:26.227 "trsvcid": "54472", 00:15:26.227 "trtype": "TCP" 00:15:26.227 }, 00:15:26.227 "qid": 0, 00:15:26.227 "state": "enabled", 00:15:26.227 "thread": "nvmf_tgt_poll_group_000" 00:15:26.227 } 00:15:26.227 ]' 00:15:26.227 22:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.227 22:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.227 22:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.484 22:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.484 22:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.484 22:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.484 22:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.484 22:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.778 22:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-secret DHHC-1:03:ZTE3ZTIwNTJmYTVkNGU2NTYxNzM2YTZhYTE5YTY4Yzc0ZjgxYWZjOGQ5NmIyYjJlNWMzYTgyM2Y2MzZlNGJkZAgAJ6s=: 00:15:27.381 22:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.381 22:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:27.381 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.381 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.381 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.381 22:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --dhchap-key key3 00:15:27.381 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.381 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.381 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.381 22:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:27.381 22:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:27.639 22:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.639 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:27.639 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.639 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:27.639 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.639 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:27.639 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.639 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.639 22:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.898 2024/07/15 22:10:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:27.898 request: 00:15:27.898 { 00:15:27.898 "method": "bdev_nvme_attach_controller", 00:15:27.898 "params": { 00:15:27.898 "name": "nvme0", 00:15:27.898 "trtype": "tcp", 00:15:27.898 "traddr": "10.0.0.2", 00:15:27.898 "adrfam": "ipv4", 00:15:27.898 "trsvcid": "4420", 00:15:27.898 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:27.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29", 00:15:27.898 "prchk_reftag": false, 00:15:27.898 "prchk_guard": false, 00:15:27.898 "hdgst": false, 00:15:27.898 "ddgst": false, 00:15:27.898 "dhchap_key": "key3" 00:15:27.898 } 00:15:27.898 } 00:15:27.898 Got JSON-RPC error response 00:15:27.898 GoRPCClient: error on JSON-RPC call 00:15:27.898 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:27.898 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.898 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.898 22:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.898 22:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:15:27.898 22:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:15:27.898 22:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 
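Note: the Input/output errors in this stretch of the trace are the point of the test, not a fault in the setup: the host-side DH-HMAC-CHAP parameters are deliberately mismatched against what the target negotiated (sha512/ffdhe8192), so the attach must be rejected. A sketch of that pattern, with the host socket path, NQNs and key name copied from the trace:

# Sketch of the mismatch test above: restrict the host's allowed DH-HMAC-CHAP
# digests, then expect the controller attach to fail.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }
hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29

hostrpc bdev_nvme_set_options --dhchap-digests sha256   # target negotiated sha512
if hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
    echo "FAIL: attach succeeded despite the digest mismatch" >&2
fi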
00:15:27.898 22:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:28.466 22:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:28.466 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:28.466 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:28.466 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:28.466 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.466 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:28.466 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.466 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:28.466 22:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:28.725 2024/07/15 22:10:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:28.725 request: 00:15:28.725 { 00:15:28.725 "method": "bdev_nvme_attach_controller", 00:15:28.725 "params": { 00:15:28.725 "name": "nvme0", 00:15:28.725 "trtype": "tcp", 00:15:28.725 "traddr": "10.0.0.2", 00:15:28.725 "adrfam": "ipv4", 00:15:28.725 "trsvcid": "4420", 00:15:28.725 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:28.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29", 00:15:28.725 "prchk_reftag": false, 00:15:28.725 "prchk_guard": false, 00:15:28.725 "hdgst": false, 00:15:28.725 "ddgst": false, 00:15:28.725 "dhchap_key": "key3" 00:15:28.725 } 00:15:28.725 } 00:15:28.725 Got JSON-RPC error response 00:15:28.725 GoRPCClient: error on JSON-RPC call 00:15:28.725 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:28.725 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:28.725 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:28.725 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
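Note: each of these expected failures is wrapped in the harness's NOT helper, whose exit-status handling is what the surrounding autotest_common.sh lines (local es=0, es=1, the es > 128 and !es == 0 checks) show. A paraphrased sketch of that idea, not a copy of the real helper:

# Sketch of a NOT-style assertion: invert the exit status of a command that is
# supposed to fail, while still treating signal deaths (status > 128) as errors.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # killed by a signal: propagate the failure
    (( es != 0 ))                    # return success only if the command really failed
}

NOT false && echo "ok: 'false' failed, as required"
NOT true  || echo "ok: 'true' succeeding makes NOT return non-zero"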
00:15:28.725 22:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:28.725 22:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:15:28.725 22:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:15:28.725 22:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:28.725 22:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:28.725 22:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.984 22:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:29.243 2024/07/15 22:10:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:29.243 request: 00:15:29.243 { 00:15:29.243 "method": "bdev_nvme_attach_controller", 00:15:29.243 "params": { 00:15:29.243 "name": "nvme0", 00:15:29.243 "trtype": "tcp", 00:15:29.243 "traddr": "10.0.0.2", 00:15:29.243 "adrfam": "ipv4", 00:15:29.243 "trsvcid": "4420", 00:15:29.243 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:29.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29", 00:15:29.243 "prchk_reftag": false, 00:15:29.243 "prchk_guard": false, 00:15:29.243 "hdgst": false, 00:15:29.243 "ddgst": false, 00:15:29.243 "dhchap_key": "key0", 00:15:29.243 "dhchap_ctrlr_key": "key1" 00:15:29.243 } 00:15:29.243 } 00:15:29.243 Got JSON-RPC error response 00:15:29.243 GoRPCClient: error on JSON-RPC call 00:15:29.243 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:29.243 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:29.243 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:29.243 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:29.243 22:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:29.243 22:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:29.502 00:15:29.502 22:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:15:29.502 22:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:15:29.502 22:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.760 22:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.760 22:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.760 22:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77898 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 
-- # '[' -z 77898 ']' 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77898 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77898 00:15:30.019 killing process with pid 77898 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77898' 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77898 00:15:30.019 22:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77898 00:15:30.277 22:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:30.277 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:30.277 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:30.536 rmmod nvme_tcp 00:15:30.536 rmmod nvme_fabrics 00:15:30.536 rmmod nvme_keyring 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82896 ']' 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82896 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82896 ']' 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82896 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82896 00:15:30.536 killing process with pid 82896 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82896' 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82896 00:15:30.536 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82896 00:15:30.795 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:30.795 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:30.796 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:15:30.796 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.796 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.796 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.796 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.796 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.796 22:10:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:30.796 22:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.89R /tmp/spdk.key-sha256.2XG /tmp/spdk.key-sha384.qWR /tmp/spdk.key-sha512.inK /tmp/spdk.key-sha512.MDs /tmp/spdk.key-sha384.84J /tmp/spdk.key-sha256.BLL '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:30.796 ************************************ 00:15:30.796 END TEST nvmf_auth_target 00:15:30.796 ************************************ 00:15:30.796 00:15:30.796 real 3m5.437s 00:15:30.796 user 7m33.869s 00:15:30.796 sys 0m22.544s 00:15:30.796 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:30.796 22:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.796 22:10:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:30.796 22:10:17 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:15:30.796 22:10:17 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:30.796 22:10:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:30.796 22:10:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.796 22:10:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:30.796 ************************************ 00:15:30.796 START TEST nvmf_bdevio_no_huge 00:15:30.796 ************************************ 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:30.796 * Looking for test storage... 
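Note: at this point the auth test has finished cleaning up (processes killed, nvme modules removed, key files deleted) and the harness moves on to nvmf_bdevio_no_huge. A sketch of invoking that test stand-alone, outside the Jenkins harness, assuming a built SPDK tree at the path shown in the trace and root privileges (network namespaces and iptables are used):

# Sketch only: run the next test on its own.
cd /home/vagrant/spdk_repo/spdk
sudo ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages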
00:15:30.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:30.796 22:10:17 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:30.796 Cannot find device "nvmf_tgt_br" 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:30.796 Cannot find device "nvmf_tgt_br2" 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:30.796 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:31.054 Cannot find device "nvmf_tgt_br" 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:31.054 Cannot find device "nvmf_tgt_br2" 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
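Note: the "Cannot find device" / "Cannot open network namespace" messages above are only the teardown of interfaces left over from a previous run; the commands that follow in the trace rebuild the topology. Condensed into one place, with names and addresses exactly as they appear below (the second target interface, 10.0.0.3 on nvmf_tgt_if2/nvmf_tgt_br2, is wired up the same way and omitted here), the layout is a veth pair into a target namespace plus a bridge on the host side:

# Condensed sketch of the veth/namespace topology the following commands build.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2        # host reaches the target address across the bridge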
00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.054 22:10:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.054 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:31.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:15:31.313 00:15:31.313 --- 10.0.0.2 ping statistics --- 00:15:31.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.313 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:31.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:31.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:31.313 00:15:31.313 --- 10.0.0.3 ping statistics --- 00:15:31.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.313 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:31.313 00:15:31.313 --- 10.0.0.1 ping statistics --- 00:15:31.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.313 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=83294 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 83294 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83294 ']' 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
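Note: with connectivity across the bridge confirmed by the pings, the target application is launched inside the namespace. The whole point of this test is running without hugepages; a sketch of the launch shown above, with the core mask matching the reactor messages reported further down (cores 3-6):

# Sketch of the target launch: no hugepages, 1024 MB of regular memory,
# core mask 0x78 (cores 3-6), run inside the namespace so it owns 10.0.0.2.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!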
00:15:31.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.313 22:10:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:31.313 [2024-07-15 22:10:18.102764] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:15:31.313 [2024-07-15 22:10:18.102883] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:31.313 [2024-07-15 22:10:18.250862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.571 [2024-07-15 22:10:18.384959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.571 [2024-07-15 22:10:18.385022] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.571 [2024-07-15 22:10:18.385037] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.571 [2024-07-15 22:10:18.385048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.571 [2024-07-15 22:10:18.385057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:31.571 [2024-07-15 22:10:18.385237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:31.571 [2024-07-15 22:10:18.385797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:31.571 [2024-07-15 22:10:18.385924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:31.571 [2024-07-15 22:10:18.385940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:32.505 [2024-07-15 22:10:19.132927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:32.505 Malloc0 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
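Note: the RPCs at bdevio.sh@18-22, partly above and partly below this point, provision the test subsystem. Collected into one sequence, with options copied verbatim from the trace:

# Condensed sketch of the provisioning RPCs from bdevio.sh@18-22
# (the target listens on the default /var/tmp/spdk.sock).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192              # transport options as in the trace
$rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420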
00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:32.505 [2024-07-15 22:10:19.174817] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:32.505 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:32.505 { 00:15:32.505 "params": { 00:15:32.505 "name": "Nvme$subsystem", 00:15:32.505 "trtype": "$TEST_TRANSPORT", 00:15:32.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:32.506 "adrfam": "ipv4", 00:15:32.506 "trsvcid": "$NVMF_PORT", 00:15:32.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:32.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:32.506 "hdgst": ${hdgst:-false}, 00:15:32.506 "ddgst": ${ddgst:-false} 00:15:32.506 }, 00:15:32.506 "method": "bdev_nvme_attach_controller" 00:15:32.506 } 00:15:32.506 EOF 00:15:32.506 )") 00:15:32.506 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:15:32.506 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:15:32.506 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:15:32.506 22:10:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:32.506 "params": { 00:15:32.506 "name": "Nvme1", 00:15:32.506 "trtype": "tcp", 00:15:32.506 "traddr": "10.0.0.2", 00:15:32.506 "adrfam": "ipv4", 00:15:32.506 "trsvcid": "4420", 00:15:32.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:32.506 "hdgst": false, 00:15:32.506 "ddgst": false 00:15:32.506 }, 00:15:32.506 "method": "bdev_nvme_attach_controller" 00:15:32.506 }' 00:15:32.506 [2024-07-15 22:10:19.237409] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:15:32.506 [2024-07-15 22:10:19.237506] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83348 ] 00:15:32.506 [2024-07-15 22:10:19.380506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:32.764 [2024-07-15 22:10:19.518662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.764 [2024-07-15 22:10:19.518810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.764 [2024-07-15 22:10:19.518816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.764 I/O targets: 00:15:32.764 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:32.764 00:15:32.764 00:15:32.764 CUnit - A unit testing framework for C - Version 2.1-3 00:15:32.764 http://cunit.sourceforge.net/ 00:15:32.764 00:15:32.764 00:15:32.764 Suite: bdevio tests on: Nvme1n1 00:15:33.022 Test: blockdev write read block ...passed 00:15:33.022 Test: blockdev write zeroes read block ...passed 00:15:33.022 Test: blockdev write zeroes read no split ...passed 00:15:33.022 Test: blockdev write zeroes read split ...passed 00:15:33.022 Test: blockdev write zeroes read split partial ...passed 00:15:33.022 Test: blockdev reset ...[2024-07-15 22:10:19.879677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:33.022 [2024-07-15 22:10:19.879781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b13460 (9): Bad file descriptor 00:15:33.022 passed 00:15:33.022 Test: blockdev write read 8 blocks ...[2024-07-15 22:10:19.893875] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:33.022 passed 00:15:33.022 Test: blockdev write read size > 128k ...passed 00:15:33.022 Test: blockdev write read invalid size ...passed 00:15:33.022 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:33.022 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:33.022 Test: blockdev write read max offset ...passed 00:15:33.281 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:33.281 Test: blockdev writev readv 8 blocks ...passed 00:15:33.281 Test: blockdev writev readv 30 x 1block ...passed 00:15:33.281 Test: blockdev writev readv block ...passed 00:15:33.281 Test: blockdev writev readv size > 128k ...passed 00:15:33.281 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:33.281 Test: blockdev comparev and writev ...[2024-07-15 22:10:20.068459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:33.281 [2024-07-15 22:10:20.068509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:33.281 [2024-07-15 22:10:20.068531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:33.281 [2024-07-15 22:10:20.068542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:33.281 [2024-07-15 22:10:20.069095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:33.281 [2024-07-15 22:10:20.069119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:33.281 [2024-07-15 22:10:20.069137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:33.281 [2024-07-15 22:10:20.069147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:33.281 [2024-07-15 22:10:20.069522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:33.281 [2024-07-15 22:10:20.069544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:33.282 [2024-07-15 22:10:20.069562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:33.282 [2024-07-15 22:10:20.069572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:33.282 [2024-07-15 22:10:20.069942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:33.282 [2024-07-15 22:10:20.069965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:33.282 [2024-07-15 22:10:20.069983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:33.282 [2024-07-15 22:10:20.069993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:15:33.282 passed 00:15:33.282 Test: blockdev nvme passthru rw ...passed 00:15:33.282 Test: blockdev nvme passthru vendor specific ...passed 00:15:33.282 Test: blockdev nvme admin passthru ...[2024-07-15 22:10:20.152573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:33.282 [2024-07-15 22:10:20.152628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:33.282 [2024-07-15 22:10:20.152787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:33.282 [2024-07-15 22:10:20.152806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:33.282 [2024-07-15 22:10:20.152952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:33.282 [2024-07-15 22:10:20.152970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:33.282 [2024-07-15 22:10:20.153156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:33.282 [2024-07-15 22:10:20.153174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:33.282 passed 00:15:33.282 Test: blockdev copy ...passed 00:15:33.282 00:15:33.282 Run Summary: Type Total Ran Passed Failed Inactive 00:15:33.282 suites 1 1 n/a 0 0 00:15:33.282 tests 23 23 23 0 0 00:15:33.282 asserts 152 152 152 0 n/a 00:15:33.282 00:15:33.282 Elapsed time = 1.018 seconds 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.857 rmmod nvme_tcp 00:15:33.857 rmmod nvme_fabrics 00:15:33.857 rmmod nvme_keyring 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 83294 ']' 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 83294 00:15:33.857 
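For reference, the run summary above was produced by pointing bdevio at the Malloc0 namespace exported earlier in this log. Below is a condensed sketch of the target-side RPCs plus an initiator config equivalent to the JSON fragment printed before the test; the /tmp/bdevio.json filename is illustrative, and the surrounding "subsystems" wrapper is SPDK's standard --json layout rather than a verbatim copy of what gen_nvmf_target_json emits:

    # Target side: export Malloc0 over NVMe/TCP on 10.0.0.2:4420.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: attach the remote namespace as a bdev and run the bdevio suite on it.
    cat > /tmp/bdevio.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./test/bdev/bdevio/bdevio --json /tmp/bdevio.json --no-huge -s 1024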
22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83294 ']' 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83294 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83294 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:33.857 killing process with pid 83294 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83294' 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83294 00:15:33.857 22:10:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83294 00:15:34.437 22:10:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.437 22:10:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.437 22:10:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.437 22:10:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.437 22:10:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.437 22:10:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.437 22:10:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.437 22:10:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.437 22:10:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:34.437 00:15:34.437 real 0m3.554s 00:15:34.437 user 0m12.879s 00:15:34.437 sys 0m1.346s 00:15:34.437 22:10:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:34.437 22:10:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.437 ************************************ 00:15:34.437 END TEST nvmf_bdevio_no_huge 00:15:34.437 ************************************ 00:15:34.437 22:10:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:34.437 22:10:21 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:34.437 22:10:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:34.437 22:10:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.437 22:10:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:34.437 ************************************ 00:15:34.437 START TEST nvmf_tls 00:15:34.437 ************************************ 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:34.437 * Looking for test storage... 
00:15:34.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:34.437 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:34.438 Cannot find device "nvmf_tgt_br" 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.438 Cannot find device "nvmf_tgt_br2" 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:34.438 Cannot find device "nvmf_tgt_br" 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:34.438 Cannot find device "nvmf_tgt_br2" 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:15:34.438 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:34.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:34.696 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:34.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:15:34.696 00:15:34.697 --- 10.0.0.2 ping statistics --- 00:15:34.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.697 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:34.697 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:34.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:34.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:15:34.697 00:15:34.697 --- 10.0.0.3 ping statistics --- 00:15:34.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.697 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:34.697 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:34.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:34.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:34.697 00:15:34.697 --- 10.0.0.1 ping statistics --- 00:15:34.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.697 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:34.697 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.697 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:15:34.697 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:34.697 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.697 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:34.697 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:34.697 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.697 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:34.697 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83536 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83536 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83536 ']' 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.955 22:10:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.955 [2024-07-15 22:10:21.725284] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:15:34.955 [2024-07-15 22:10:21.725710] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.955 [2024-07-15 22:10:21.873259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.213 [2024-07-15 22:10:21.942386] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.213 [2024-07-15 22:10:21.942448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:35.213 [2024-07-15 22:10:21.942462] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.213 [2024-07-15 22:10:21.942472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.213 [2024-07-15 22:10:21.942481] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.213 [2024-07-15 22:10:21.942516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.780 22:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:35.780 22:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:35.780 22:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:35.780 22:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:35.780 22:10:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.038 22:10:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.038 22:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:15:36.038 22:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:36.296 true 00:15:36.296 22:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:36.296 22:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:15:36.555 22:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:15:36.555 22:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:15:36.555 22:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:36.812 22:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:15:36.812 22:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:37.069 22:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:15:37.069 22:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:15:37.069 22:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:37.327 22:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:15:37.327 22:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:37.584 22:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:15:37.585 22:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:15:37.585 22:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:15:37.585 22:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:37.843 22:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:15:37.843 22:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:15:37.843 22:10:24 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:38.409 22:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:38.409 22:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
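The xtrace above is tls.sh validating the ssl socket implementation before any TLS traffic flows: it makes ssl the default impl, then confirms that --tls-version and the kTLS toggle round-trip through sock_impl_get_options. A condensed sketch of the same checks against a running target on the default RPC socket (the rpc shell variable is just shorthand; the version-7 probe from the log is omitted):

    rpc=./scripts/rpc.py

    $rpc sock_set_default_impl -i ssl

    # Pin TLS 1.3 and read the setting back.
    $rpc sock_impl_set_options -i ssl --tls-version 13
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .tls_version) == 13 ]]

    # kTLS can be flipped on and off the same way; the test leaves it disabled.
    $rpc sock_impl_set_options -i ssl --enable-ktls
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]]
    $rpc sock_impl_set_options -i ssl --disable-ktls
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == false ]]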
00:15:38.409 22:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:15:38.409 22:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:15:38.409 22:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:38.667 22:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:38.667 22:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:15:39.234 22:10:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:39.235 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:39.235 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:15:39.235 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.NjQlOJ9AJo 00:15:39.235 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:39.235 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.1Pbpt2IxJG 00:15:39.235 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:39.235 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:39.235 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.NjQlOJ9AJo 00:15:39.235 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.1Pbpt2IxJG 00:15:39.235 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:39.493 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:39.752 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.NjQlOJ9AJo 
00:15:39.752 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NjQlOJ9AJo 00:15:39.752 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:40.010 [2024-07-15 22:10:26.940010] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.268 22:10:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:40.526 22:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:40.784 [2024-07-15 22:10:27.488141] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:40.784 [2024-07-15 22:10:27.488363] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.784 22:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:41.042 malloc0 00:15:41.042 22:10:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:41.301 22:10:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NjQlOJ9AJo 00:15:41.559 [2024-07-15 22:10:28.383025] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:41.559 22:10:28 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NjQlOJ9AJo 00:15:53.764 Initializing NVMe Controllers 00:15:53.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:53.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:53.764 Initialization complete. Launching workers. 00:15:53.764 ======================================================== 00:15:53.764 Latency(us) 00:15:53.764 Device Information : IOPS MiB/s Average min max 00:15:53.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8922.67 34.85 7174.73 1617.83 12310.96 00:15:53.764 ======================================================== 00:15:53.764 Total : 8922.67 34.85 7174.73 1617.83 12310.96 00:15:53.764 00:15:53.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
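The section above generates two interchange-format PSKs, restricts the ssl impl to TLS 1.3, stands up a TLS-enabled listener (-k) for cnode1, binds host1 to the first key with nvmf_subsystem_add_host --psk, and then verifies the data path with spdk_nvme_perf over -S ssl. A trimmed sketch of that flow with the "ip netns exec nvmf_tgt_ns_spdk" wrapper dropped; /tmp/psk.txt is an illustrative name for the key file and the key literal is the one printed above:

    rpc=./scripts/rpc.py

    # Interchange-format PSK as produced by format_interchange_psk above.
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/psk.txt
    chmod 0600 /tmp/psk.txt

    # Target: TLS 1.3 only, TCP transport, one subsystem with a TLS listener and a malloc
    # namespace. framework_start_init is needed because the target ran with --wait-for-rpc.
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/psk.txt

    # Initiator: drive I/O over TLS with the matching key.
    ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path /tmp/psk.txt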
00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NjQlOJ9AJo 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NjQlOJ9AJo' 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83902 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83902 /var/tmp/bdevperf.sock 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83902 ']' 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.764 22:10:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.764 [2024-07-15 22:10:38.653789] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:15:53.764 [2024-07-15 22:10:38.653913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83902 ] 00:15:53.764 [2024-07-15 22:10:38.792005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.764 [2024-07-15 22:10:38.853324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.764 22:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.764 22:10:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:53.764 22:10:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NjQlOJ9AJo 00:15:53.764 [2024-07-15 22:10:39.950116] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:53.764 [2024-07-15 22:10:39.950467] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:53.764 TLSTESTn1 00:15:53.764 22:10:40 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:53.764 Running I/O for 10 seconds... 
00:16:03.768 00:16:03.768 Latency(us) 00:16:03.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.768 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:03.768 Verification LBA range: start 0x0 length 0x2000 00:16:03.768 TLSTESTn1 : 10.02 4127.53 16.12 0.00 0.00 30940.36 8579.26 40989.79 00:16:03.768 =================================================================================================================== 00:16:03.768 Total : 4127.53 16.12 0.00 0.00 30940.36 8579.26 40989.79 00:16:03.768 0 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83902 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83902 ']' 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83902 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83902 00:16:03.768 killing process with pid 83902 00:16:03.768 Received shutdown signal, test time was about 10.000000 seconds 00:16:03.768 00:16:03.768 Latency(us) 00:16:03.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.768 =================================================================================================================== 00:16:03.768 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83902' 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83902 00:16:03.768 [2024-07-15 22:10:50.222778] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83902 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1Pbpt2IxJG 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1Pbpt2IxJG 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1Pbpt2IxJG 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 
00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1Pbpt2IxJG' 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84050 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84050 /var/tmp/bdevperf.sock 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84050 ']' 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:03.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:03.768 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.768 [2024-07-15 22:10:50.443427] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:03.768 [2024-07-15 22:10:50.443657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84050 ] 00:16:03.768 [2024-07-15 22:10:50.575104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.768 [2024-07-15 22:10:50.663277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.027 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.027 22:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:04.027 22:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1Pbpt2IxJG 00:16:04.285 [2024-07-15 22:10:50.980585] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:04.285 [2024-07-15 22:10:50.980704] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:04.285 [2024-07-15 22:10:50.985672] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:04.285 [2024-07-15 22:10:50.986255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7feca0 (107): Transport endpoint is not connected 00:16:04.285 [2024-07-15 22:10:50.987237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7feca0 (9): Bad file descriptor 
00:16:04.285 [2024-07-15 22:10:50.988232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:04.285 [2024-07-15 22:10:50.988260] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:04.285 [2024-07-15 22:10:50.988275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:04.286 2024/07/15 22:10:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.1Pbpt2IxJG subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:04.286 request: 00:16:04.286 { 00:16:04.286 "method": "bdev_nvme_attach_controller", 00:16:04.286 "params": { 00:16:04.286 "name": "TLSTEST", 00:16:04.286 "trtype": "tcp", 00:16:04.286 "traddr": "10.0.0.2", 00:16:04.286 "adrfam": "ipv4", 00:16:04.286 "trsvcid": "4420", 00:16:04.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:04.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:04.286 "prchk_reftag": false, 00:16:04.286 "prchk_guard": false, 00:16:04.286 "hdgst": false, 00:16:04.286 "ddgst": false, 00:16:04.286 "psk": "/tmp/tmp.1Pbpt2IxJG" 00:16:04.286 } 00:16:04.286 } 00:16:04.286 Got JSON-RPC error response 00:16:04.286 GoRPCClient: error on JSON-RPC call 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84050 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84050 ']' 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84050 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84050 00:16:04.286 killing process with pid 84050 00:16:04.286 Received shutdown signal, test time was about 10.000000 seconds 00:16:04.286 00:16:04.286 Latency(us) 00:16:04.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.286 =================================================================================================================== 00:16:04.286 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84050' 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84050 00:16:04.286 [2024-07-15 22:10:51.037634] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84050 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 
-- # [[ -n '' ]] 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NjQlOJ9AJo 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NjQlOJ9AJo 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:04.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NjQlOJ9AJo 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NjQlOJ9AJo' 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84078 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84078 /var/tmp/bdevperf.sock 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84078 ']' 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.286 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.544 [2024-07-15 22:10:51.251398] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:16:04.544 [2024-07-15 22:10:51.251515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84078 ] 00:16:04.544 [2024-07-15 22:10:51.383199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.544 [2024-07-15 22:10:51.471328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.803 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.803 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:04.803 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.NjQlOJ9AJo 00:16:05.062 [2024-07-15 22:10:51.903342] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:05.062 [2024-07-15 22:10:51.903495] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:05.062 [2024-07-15 22:10:51.913263] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:05.062 [2024-07-15 22:10:51.913311] posix.c: 528:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:05.062 [2024-07-15 22:10:51.913389] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:05.062 [2024-07-15 22:10:51.913645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fdca0 (107): Transport endpoint is not connected 00:16:05.062 [2024-07-15 22:10:51.914622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fdca0 (9): Bad file descriptor 00:16:05.062 [2024-07-15 22:10:51.915616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:05.062 [2024-07-15 22:10:51.915651] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:05.062 [2024-07-15 22:10:51.915669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
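For reference, the expected-to-fail attach exercised here reduces to the single RPC below (command, NQNs and PSK path taken verbatim from the log). The target only registered this PSK for nqn.2016-06.io.spdk:host1, so the TLS PSK identity derived for host2 ("NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1") has no match on the target and the attach is expected to fail; a minimal sketch:

# Expected failure: no PSK is registered on the target for this hostnqn/subnqn pair.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
  bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 \
  --psk /tmp/tmp.NjQlOJ9AJo \
  && echo 'unexpected: attach succeeded' \
  || echo 'attach failed as expected (Code=-5, Input/output error)'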
00:16:05.062 2024/07/15 22:10:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.NjQlOJ9AJo subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:05.062 request: 00:16:05.062 { 00:16:05.062 "method": "bdev_nvme_attach_controller", 00:16:05.062 "params": { 00:16:05.062 "name": "TLSTEST", 00:16:05.062 "trtype": "tcp", 00:16:05.062 "traddr": "10.0.0.2", 00:16:05.062 "adrfam": "ipv4", 00:16:05.062 "trsvcid": "4420", 00:16:05.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.062 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:05.062 "prchk_reftag": false, 00:16:05.062 "prchk_guard": false, 00:16:05.062 "hdgst": false, 00:16:05.062 "ddgst": false, 00:16:05.062 "psk": "/tmp/tmp.NjQlOJ9AJo" 00:16:05.062 } 00:16:05.062 } 00:16:05.062 Got JSON-RPC error response 00:16:05.062 GoRPCClient: error on JSON-RPC call 00:16:05.062 22:10:51 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84078 00:16:05.062 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84078 ']' 00:16:05.062 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84078 00:16:05.062 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:05.062 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.062 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84078 00:16:05.062 killing process with pid 84078 00:16:05.062 Received shutdown signal, test time was about 10.000000 seconds 00:16:05.062 00:16:05.062 Latency(us) 00:16:05.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.062 =================================================================================================================== 00:16:05.062 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:05.062 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:05.062 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:05.062 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84078' 00:16:05.062 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84078 00:16:05.062 [2024-07-15 22:10:51.966015] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:05.062 22:10:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84078 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NjQlOJ9AJo 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NjQlOJ9AJo 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NjQlOJ9AJo 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NjQlOJ9AJo' 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84110 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84110 /var/tmp/bdevperf.sock 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84110 ']' 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.321 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.321 [2024-07-15 22:10:52.181218] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:16:05.321 [2024-07-15 22:10:52.181308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84110 ] 00:16:05.578 [2024-07-15 22:10:52.315488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.578 [2024-07-15 22:10:52.375502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.578 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.578 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:05.578 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NjQlOJ9AJo 00:16:05.836 [2024-07-15 22:10:52.735711] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:05.836 [2024-07-15 22:10:52.735825] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:05.836 [2024-07-15 22:10:52.743368] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:05.836 [2024-07-15 22:10:52.743420] posix.c: 528:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:05.836 [2024-07-15 22:10:52.743479] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:05.836 [2024-07-15 22:10:52.744411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cbca0 (107): Transport endpoint is not connected 00:16:05.836 [2024-07-15 22:10:52.745391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cbca0 (9): Bad file descriptor 00:16:05.836 [2024-07-15 22:10:52.746388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:05.836 [2024-07-15 22:10:52.746423] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:05.836 [2024-07-15 22:10:52.746440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:16:05.836 2024/07/15 22:10:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.NjQlOJ9AJo subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:05.836 request: 00:16:05.836 { 00:16:05.836 "method": "bdev_nvme_attach_controller", 00:16:05.836 "params": { 00:16:05.836 "name": "TLSTEST", 00:16:05.836 "trtype": "tcp", 00:16:05.836 "traddr": "10.0.0.2", 00:16:05.836 "adrfam": "ipv4", 00:16:05.836 "trsvcid": "4420", 00:16:05.836 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:05.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:05.836 "prchk_reftag": false, 00:16:05.836 "prchk_guard": false, 00:16:05.836 "hdgst": false, 00:16:05.836 "ddgst": false, 00:16:05.836 "psk": "/tmp/tmp.NjQlOJ9AJo" 00:16:05.836 } 00:16:05.836 } 00:16:05.836 Got JSON-RPC error response 00:16:05.836 GoRPCClient: error on JSON-RPC call 00:16:05.836 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84110 00:16:05.836 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84110 ']' 00:16:05.836 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84110 00:16:05.836 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:05.836 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.836 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84110 00:16:06.099 killing process with pid 84110 00:16:06.099 Received shutdown signal, test time was about 10.000000 seconds 00:16:06.099 00:16:06.099 Latency(us) 00:16:06.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.099 =================================================================================================================== 00:16:06.099 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84110' 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84110 00:16:06.099 [2024-07-15 22:10:52.794996] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84110 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84142 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84142 /var/tmp/bdevperf.sock 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84142 ']' 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:06.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.099 22:10:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.099 [2024-07-15 22:10:53.007627] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:16:06.099 [2024-07-15 22:10:53.007727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84142 ] 00:16:06.364 [2024-07-15 22:10:53.139924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.364 [2024-07-15 22:10:53.228321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.297 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.297 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:07.297 22:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:07.554 [2024-07-15 22:10:54.333961] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:07.554 [2024-07-15 22:10:54.335353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ab240 (9): Bad file descriptor 00:16:07.554 [2024-07-15 22:10:54.336346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:07.554 [2024-07-15 22:10:54.336386] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:07.554 [2024-07-15 22:10:54.336402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:07.554 2024/07/15 22:10:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:07.554 request: 00:16:07.554 { 00:16:07.554 "method": "bdev_nvme_attach_controller", 00:16:07.554 "params": { 00:16:07.554 "name": "TLSTEST", 00:16:07.554 "trtype": "tcp", 00:16:07.554 "traddr": "10.0.0.2", 00:16:07.554 "adrfam": "ipv4", 00:16:07.554 "trsvcid": "4420", 00:16:07.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:07.554 "prchk_reftag": false, 00:16:07.554 "prchk_guard": false, 00:16:07.554 "hdgst": false, 00:16:07.554 "ddgst": false 00:16:07.554 } 00:16:07.554 } 00:16:07.554 Got JSON-RPC error response 00:16:07.554 GoRPCClient: error on JSON-RPC call 00:16:07.554 22:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84142 00:16:07.554 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84142 ']' 00:16:07.554 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84142 00:16:07.554 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:07.554 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.554 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84142 00:16:07.554 killing process with pid 84142 00:16:07.554 Received shutdown signal, test time was about 10.000000 seconds 00:16:07.554 00:16:07.554 Latency(us) 00:16:07.554 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.554 =================================================================================================================== 00:16:07.554 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:07.554 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:07.554 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:07.554 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84142' 00:16:07.554 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84142 00:16:07.554 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84142 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83536 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83536 ']' 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83536 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83536 00:16:07.821 killing process with pid 83536 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83536' 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83536 00:16:07.821 [2024-07-15 22:10:54.585911] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83536 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:16:07.821 22:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:08.080 22:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:08.080 22:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:16:08.080 22:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.GdqwpZOJPt 
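The key string printed by format_interchange_psk above follows the NVMe TLS PSK interchange layout: the literal prefix NVMeTLSkey-1, a hash identifier (02 here, matching the digest argument 2 and presumably selecting the SHA-384 variant), and a base64 blob terminated by ':'. Decoding the blob from the log gives the 48 key characters followed by four extra bytes, consistent with a little-endian CRC32 appended to the key characters. A rough reconstruction under those assumptions (the authoritative helper is format_interchange_psk/format_key in nvmf/common.sh, which likewise shells out to python):

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
# Assumption: payload = key characters followed by a little-endian CRC32 of those characters.
print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode())
EOF
# If the assumptions hold, this prints the same string the test stores in /tmp/tmp.GdqwpZOJPt:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: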
00:16:08.080 22:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:08.080 22:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.GdqwpZOJPt 00:16:08.080 22:10:54 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:16:08.080 22:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.080 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:08.080 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:08.080 22:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84198 00:16:08.081 22:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:08.081 22:10:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84198 00:16:08.081 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84198 ']' 00:16:08.081 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.081 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.081 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.081 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.081 22:10:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:08.081 [2024-07-15 22:10:54.880480] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:08.081 [2024-07-15 22:10:54.881046] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.081 [2024-07-15 22:10:55.015007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.343 [2024-07-15 22:10:55.073488] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.343 [2024-07-15 22:10:55.073541] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.343 [2024-07-15 22:10:55.073553] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.343 [2024-07-15 22:10:55.073561] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.343 [2024-07-15 22:10:55.073568] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
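The setup_nvmf_tgt call the test performs next (target/tls.sh@165, expanded in the following log lines) reduces to a short RPC sequence against the freshly started target. Paths, NQNs and the PSK file are the ones from the log; the -k flag on the listener is what enables TLS on it (hence the "TLS support is considered experimental" notice):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GdqwpZOJPt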
00:16:08.343 [2024-07-15 22:10:55.073593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.343 22:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.343 22:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:08.343 22:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:08.343 22:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:08.343 22:10:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:08.343 22:10:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.343 22:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.GdqwpZOJPt 00:16:08.343 22:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GdqwpZOJPt 00:16:08.343 22:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:08.600 [2024-07-15 22:10:55.453482] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.600 22:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:08.856 22:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:09.114 [2024-07-15 22:10:55.977607] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:09.114 [2024-07-15 22:10:55.977823] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.114 22:10:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:09.372 malloc0 00:16:09.372 22:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:09.628 22:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GdqwpZOJPt 00:16:09.886 [2024-07-15 22:10:56.756559] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GdqwpZOJPt 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GdqwpZOJPt' 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84287 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:09.886 22:10:56 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84287 /var/tmp/bdevperf.sock 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84287 ']' 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.886 22:10:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.143 [2024-07-15 22:10:56.847673] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:10.144 [2024-07-15 22:10:56.847806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84287 ] 00:16:10.144 [2024-07-15 22:10:56.996809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.144 [2024-07-15 22:10:57.082859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.077 22:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:11.077 22:10:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:11.077 22:10:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GdqwpZOJPt 00:16:11.335 [2024-07-15 22:10:58.137827] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:11.335 [2024-07-15 22:10:58.137948] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:11.335 TLSTESTn1 00:16:11.335 22:10:58 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:11.593 Running I/O for 10 seconds... 
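Condensed from the commands in the log, the initiator side of this positive case is: start bdevperf in wait mode (-z) on its own RPC socket, attach the controller over the TLS listener with the same PSK file, then kick off the configured verify workload over that socket. A sketch (the test additionally waits for the bdevperf RPC socket via waitforlisten before issuing RPCs):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
  -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# ... wait for /var/tmp/bdevperf.sock to appear ...
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
  bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GdqwpZOJPt
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
  -s /var/tmp/bdevperf.sock perform_tests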
00:16:21.572 00:16:21.572 Latency(us) 00:16:21.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.572 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:21.572 Verification LBA range: start 0x0 length 0x2000 00:16:21.572 TLSTESTn1 : 10.02 3642.41 14.23 0.00 0.00 35071.98 7685.59 35031.97 00:16:21.572 =================================================================================================================== 00:16:21.572 Total : 3642.41 14.23 0.00 0.00 35071.98 7685.59 35031.97 00:16:21.572 0 00:16:21.572 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:21.572 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84287 00:16:21.572 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84287 ']' 00:16:21.572 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84287 00:16:21.572 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:21.572 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.572 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84287 00:16:21.572 killing process with pid 84287 00:16:21.572 Received shutdown signal, test time was about 10.000000 seconds 00:16:21.572 00:16:21.572 Latency(us) 00:16:21.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.572 =================================================================================================================== 00:16:21.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:21.572 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:21.572 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:21.572 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84287' 00:16:21.572 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84287 00:16:21.572 [2024-07-15 22:11:08.401691] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:21.572 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84287 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.GdqwpZOJPt 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GdqwpZOJPt 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GdqwpZOJPt 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GdqwpZOJPt 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:21.830 
22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GdqwpZOJPt' 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84434 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84434 /var/tmp/bdevperf.sock 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84434 ']' 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:21.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.830 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:21.830 [2024-07-15 22:11:08.627170] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:21.830 [2024-07-15 22:11:08.627486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84434 ] 00:16:21.830 [2024-07-15 22:11:08.764175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.088 [2024-07-15 22:11:08.826745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.088 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.088 22:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:22.088 22:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GdqwpZOJPt 00:16:22.346 [2024-07-15 22:11:09.206837] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:22.346 [2024-07-15 22:11:09.206920] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:22.346 [2024-07-15 22:11:09.206932] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.GdqwpZOJPt 00:16:22.346 2024/07/15 22:11:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.GdqwpZOJPt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:16:22.346 request: 00:16:22.346 { 00:16:22.346 "method": "bdev_nvme_attach_controller", 00:16:22.346 "params": { 00:16:22.346 "name": "TLSTEST", 00:16:22.346 "trtype": "tcp", 00:16:22.346 "traddr": "10.0.0.2", 00:16:22.346 "adrfam": "ipv4", 00:16:22.346 "trsvcid": "4420", 00:16:22.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:22.346 "prchk_reftag": false, 00:16:22.346 "prchk_guard": false, 00:16:22.346 "hdgst": false, 00:16:22.346 "ddgst": false, 00:16:22.346 "psk": "/tmp/tmp.GdqwpZOJPt" 00:16:22.346 } 00:16:22.346 } 00:16:22.346 Got JSON-RPC error response 00:16:22.346 GoRPCClient: error on JSON-RPC call 00:16:22.346 22:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84434 00:16:22.346 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84434 ']' 00:16:22.346 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84434 00:16:22.346 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:22.346 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:22.346 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84434 00:16:22.346 killing process with pid 84434 00:16:22.346 Received shutdown signal, test time was about 10.000000 seconds 00:16:22.346 00:16:22.346 Latency(us) 00:16:22.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.346 =================================================================================================================== 00:16:22.346 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:22.346 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:22.346 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:22.346 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84434' 00:16:22.346 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84434 00:16:22.346 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84434 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84198 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84198 ']' 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84198 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84198 00:16:22.603 killing process with pid 84198 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 84198' 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84198 00:16:22.603 [2024-07-15 22:11:09.454780] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:22.603 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84198 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84470 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84470 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84470 ']' 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.861 22:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.861 [2024-07-15 22:11:09.689710] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:22.861 [2024-07-15 22:11:09.689816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.119 [2024-07-15 22:11:09.823439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.119 [2024-07-15 22:11:09.892566] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.119 [2024-07-15 22:11:09.892625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.119 [2024-07-15 22:11:09.892637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.119 [2024-07-15 22:11:09.892646] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.119 [2024-07-15 22:11:09.892653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
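The chmod 0666 case above (target/tls.sh@170-171) comes down to this: bdev_nvme refuses to load a PSK file with permissive mode bits, so the attach is rejected with "Incorrect permissions for PSK file" before any connection is attempted. A sketch reusing the paths from the log:

chmod 0666 /tmp/tmp.GdqwpZOJPt
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
  bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GdqwpZOJPt \
  || echo 'rejected as expected: Could not load PSK (Operation not permitted)'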
00:16:23.119 [2024-07-15 22:11:09.892678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.GdqwpZOJPt 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.GdqwpZOJPt 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.GdqwpZOJPt 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GdqwpZOJPt 00:16:23.119 22:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:23.682 [2024-07-15 22:11:10.324372] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.682 22:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:23.682 22:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:23.939 [2024-07-15 22:11:10.880488] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:23.939 [2024-07-15 22:11:10.880701] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.196 22:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:24.453 malloc0 00:16:24.453 22:11:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:24.710 22:11:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GdqwpZOJPt 00:16:24.710 [2024-07-15 22:11:11.639456] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:24.710 [2024-07-15 22:11:11.639509] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:16:24.710 [2024-07-15 22:11:11.639554] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:24.710 2024/07/15 22:11:11 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.GdqwpZOJPt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:24.710 request: 00:16:24.710 { 00:16:24.710 "method": "nvmf_subsystem_add_host", 00:16:24.710 "params": { 00:16:24.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.710 "host": "nqn.2016-06.io.spdk:host1", 00:16:24.710 "psk": "/tmp/tmp.GdqwpZOJPt" 00:16:24.710 } 00:16:24.710 } 00:16:24.710 Got JSON-RPC error response 00:16:24.710 GoRPCClient: error on JSON-RPC call 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84470 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84470 ']' 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84470 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84470 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:24.968 killing process with pid 84470 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84470' 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84470 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84470 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.GdqwpZOJPt 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84564 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84564 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84564 ']' 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
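The same permission check applies on the target side, as seen just above: with the key file still at mode 0666, nvmf_subsystem_add_host cannot retrieve the PSK and the RPC fails with an Internal error, so the test restores owner-only permissions (target/tls.sh@181) before registering the host again. Sketch of that corrective step:

# With /tmp/tmp.GdqwpZOJPt at 0666 the add_host RPC above fails (Code=-32603).
chmod 0600 /tmp/tmp.GdqwpZOJPt
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
  nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GdqwpZOJPt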
00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.968 22:11:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.226 [2024-07-15 22:11:11.926624] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:25.227 [2024-07-15 22:11:11.926720] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.227 [2024-07-15 22:11:12.059291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.227 [2024-07-15 22:11:12.121991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.227 [2024-07-15 22:11:12.122048] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.227 [2024-07-15 22:11:12.122060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.227 [2024-07-15 22:11:12.122069] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.227 [2024-07-15 22:11:12.122078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.227 [2024-07-15 22:11:12.122131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.159 22:11:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.159 22:11:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:26.159 22:11:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.159 22:11:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:26.159 22:11:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:26.159 22:11:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.159 22:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.GdqwpZOJPt 00:16:26.159 22:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GdqwpZOJPt 00:16:26.159 22:11:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:26.417 [2024-07-15 22:11:13.253986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.417 22:11:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:26.674 22:11:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:26.934 [2024-07-15 22:11:13.878194] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:26.934 [2024-07-15 22:11:13.878506] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.190 22:11:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:27.447 malloc0 00:16:27.447 22:11:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:27.705 22:11:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GdqwpZOJPt 00:16:27.963 [2024-07-15 22:11:14.801020] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:27.964 22:11:14 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84672 00:16:27.964 22:11:14 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:27.964 22:11:14 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:27.964 22:11:14 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84672 /var/tmp/bdevperf.sock 00:16:27.964 22:11:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84672 ']' 00:16:27.964 22:11:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:27.964 22:11:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.964 22:11:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.964 22:11:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.964 22:11:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.964 [2024-07-15 22:11:14.895195] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:27.964 [2024-07-15 22:11:14.895328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84672 ] 00:16:28.222 [2024-07-15 22:11:15.042017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.222 [2024-07-15 22:11:15.146576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.482 22:11:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.482 22:11:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:28.482 22:11:15 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GdqwpZOJPt 00:16:28.739 [2024-07-15 22:11:15.512196] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:28.739 [2024-07-15 22:11:15.512315] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:28.739 TLSTESTn1 00:16:28.739 22:11:15 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:29.306 22:11:16 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:16:29.306 "subsystems": [ 00:16:29.306 { 00:16:29.306 "subsystem": "keyring", 00:16:29.306 "config": [] 00:16:29.306 }, 00:16:29.306 { 00:16:29.306 "subsystem": "iobuf", 00:16:29.306 "config": [ 00:16:29.306 { 00:16:29.306 "method": "iobuf_set_options", 00:16:29.306 "params": { 00:16:29.306 "large_bufsize": 
135168, 00:16:29.306 "large_pool_count": 1024, 00:16:29.306 "small_bufsize": 8192, 00:16:29.306 "small_pool_count": 8192 00:16:29.306 } 00:16:29.306 } 00:16:29.306 ] 00:16:29.306 }, 00:16:29.306 { 00:16:29.306 "subsystem": "sock", 00:16:29.306 "config": [ 00:16:29.306 { 00:16:29.306 "method": "sock_set_default_impl", 00:16:29.306 "params": { 00:16:29.306 "impl_name": "posix" 00:16:29.306 } 00:16:29.306 }, 00:16:29.306 { 00:16:29.306 "method": "sock_impl_set_options", 00:16:29.306 "params": { 00:16:29.306 "enable_ktls": false, 00:16:29.306 "enable_placement_id": 0, 00:16:29.306 "enable_quickack": false, 00:16:29.306 "enable_recv_pipe": true, 00:16:29.306 "enable_zerocopy_send_client": false, 00:16:29.306 "enable_zerocopy_send_server": true, 00:16:29.306 "impl_name": "ssl", 00:16:29.306 "recv_buf_size": 4096, 00:16:29.306 "send_buf_size": 4096, 00:16:29.306 "tls_version": 0, 00:16:29.306 "zerocopy_threshold": 0 00:16:29.306 } 00:16:29.306 }, 00:16:29.306 { 00:16:29.306 "method": "sock_impl_set_options", 00:16:29.306 "params": { 00:16:29.306 "enable_ktls": false, 00:16:29.306 "enable_placement_id": 0, 00:16:29.306 "enable_quickack": false, 00:16:29.306 "enable_recv_pipe": true, 00:16:29.306 "enable_zerocopy_send_client": false, 00:16:29.306 "enable_zerocopy_send_server": true, 00:16:29.306 "impl_name": "posix", 00:16:29.306 "recv_buf_size": 2097152, 00:16:29.306 "send_buf_size": 2097152, 00:16:29.306 "tls_version": 0, 00:16:29.306 "zerocopy_threshold": 0 00:16:29.306 } 00:16:29.306 } 00:16:29.306 ] 00:16:29.306 }, 00:16:29.306 { 00:16:29.306 "subsystem": "vmd", 00:16:29.306 "config": [] 00:16:29.306 }, 00:16:29.306 { 00:16:29.306 "subsystem": "accel", 00:16:29.306 "config": [ 00:16:29.306 { 00:16:29.306 "method": "accel_set_options", 00:16:29.306 "params": { 00:16:29.306 "buf_count": 2048, 00:16:29.306 "large_cache_size": 16, 00:16:29.306 "sequence_count": 2048, 00:16:29.306 "small_cache_size": 128, 00:16:29.306 "task_count": 2048 00:16:29.306 } 00:16:29.306 } 00:16:29.306 ] 00:16:29.306 }, 00:16:29.306 { 00:16:29.306 "subsystem": "bdev", 00:16:29.306 "config": [ 00:16:29.306 { 00:16:29.306 "method": "bdev_set_options", 00:16:29.306 "params": { 00:16:29.306 "bdev_auto_examine": true, 00:16:29.306 "bdev_io_cache_size": 256, 00:16:29.306 "bdev_io_pool_size": 65535, 00:16:29.306 "iobuf_large_cache_size": 16, 00:16:29.306 "iobuf_small_cache_size": 128 00:16:29.306 } 00:16:29.306 }, 00:16:29.306 { 00:16:29.306 "method": "bdev_raid_set_options", 00:16:29.306 "params": { 00:16:29.306 "process_window_size_kb": 1024 00:16:29.306 } 00:16:29.306 }, 00:16:29.306 { 00:16:29.306 "method": "bdev_iscsi_set_options", 00:16:29.306 "params": { 00:16:29.306 "timeout_sec": 30 00:16:29.306 } 00:16:29.306 }, 00:16:29.306 { 00:16:29.306 "method": "bdev_nvme_set_options", 00:16:29.306 "params": { 00:16:29.306 "action_on_timeout": "none", 00:16:29.306 "allow_accel_sequence": false, 00:16:29.306 "arbitration_burst": 0, 00:16:29.306 "bdev_retry_count": 3, 00:16:29.306 "ctrlr_loss_timeout_sec": 0, 00:16:29.306 "delay_cmd_submit": true, 00:16:29.306 "dhchap_dhgroups": [ 00:16:29.306 "null", 00:16:29.306 "ffdhe2048", 00:16:29.306 "ffdhe3072", 00:16:29.306 "ffdhe4096", 00:16:29.306 "ffdhe6144", 00:16:29.306 "ffdhe8192" 00:16:29.306 ], 00:16:29.306 "dhchap_digests": [ 00:16:29.306 "sha256", 00:16:29.306 "sha384", 00:16:29.306 "sha512" 00:16:29.306 ], 00:16:29.306 "disable_auto_failback": false, 00:16:29.306 "fast_io_fail_timeout_sec": 0, 00:16:29.306 "generate_uuids": false, 00:16:29.306 "high_priority_weight": 0, 
00:16:29.306 "io_path_stat": false, 00:16:29.306 "io_queue_requests": 0, 00:16:29.306 "keep_alive_timeout_ms": 10000, 00:16:29.306 "low_priority_weight": 0, 00:16:29.306 "medium_priority_weight": 0, 00:16:29.306 "nvme_adminq_poll_period_us": 10000, 00:16:29.306 "nvme_error_stat": false, 00:16:29.306 "nvme_ioq_poll_period_us": 0, 00:16:29.306 "rdma_cm_event_timeout_ms": 0, 00:16:29.306 "rdma_max_cq_size": 0, 00:16:29.307 "rdma_srq_size": 0, 00:16:29.307 "reconnect_delay_sec": 0, 00:16:29.307 "timeout_admin_us": 0, 00:16:29.307 "timeout_us": 0, 00:16:29.307 "transport_ack_timeout": 0, 00:16:29.307 "transport_retry_count": 4, 00:16:29.307 "transport_tos": 0 00:16:29.307 } 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "method": "bdev_nvme_set_hotplug", 00:16:29.307 "params": { 00:16:29.307 "enable": false, 00:16:29.307 "period_us": 100000 00:16:29.307 } 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "method": "bdev_malloc_create", 00:16:29.307 "params": { 00:16:29.307 "block_size": 4096, 00:16:29.307 "name": "malloc0", 00:16:29.307 "num_blocks": 8192, 00:16:29.307 "optimal_io_boundary": 0, 00:16:29.307 "physical_block_size": 4096, 00:16:29.307 "uuid": "3a84251d-2bba-4601-8237-9847b014948f" 00:16:29.307 } 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "method": "bdev_wait_for_examine" 00:16:29.307 } 00:16:29.307 ] 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "subsystem": "nbd", 00:16:29.307 "config": [] 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "subsystem": "scheduler", 00:16:29.307 "config": [ 00:16:29.307 { 00:16:29.307 "method": "framework_set_scheduler", 00:16:29.307 "params": { 00:16:29.307 "name": "static" 00:16:29.307 } 00:16:29.307 } 00:16:29.307 ] 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "subsystem": "nvmf", 00:16:29.307 "config": [ 00:16:29.307 { 00:16:29.307 "method": "nvmf_set_config", 00:16:29.307 "params": { 00:16:29.307 "admin_cmd_passthru": { 00:16:29.307 "identify_ctrlr": false 00:16:29.307 }, 00:16:29.307 "discovery_filter": "match_any" 00:16:29.307 } 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "method": "nvmf_set_max_subsystems", 00:16:29.307 "params": { 00:16:29.307 "max_subsystems": 1024 00:16:29.307 } 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "method": "nvmf_set_crdt", 00:16:29.307 "params": { 00:16:29.307 "crdt1": 0, 00:16:29.307 "crdt2": 0, 00:16:29.307 "crdt3": 0 00:16:29.307 } 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "method": "nvmf_create_transport", 00:16:29.307 "params": { 00:16:29.307 "abort_timeout_sec": 1, 00:16:29.307 "ack_timeout": 0, 00:16:29.307 "buf_cache_size": 4294967295, 00:16:29.307 "c2h_success": false, 00:16:29.307 "data_wr_pool_size": 0, 00:16:29.307 "dif_insert_or_strip": false, 00:16:29.307 "in_capsule_data_size": 4096, 00:16:29.307 "io_unit_size": 131072, 00:16:29.307 "max_aq_depth": 128, 00:16:29.307 "max_io_qpairs_per_ctrlr": 127, 00:16:29.307 "max_io_size": 131072, 00:16:29.307 "max_queue_depth": 128, 00:16:29.307 "num_shared_buffers": 511, 00:16:29.307 "sock_priority": 0, 00:16:29.307 "trtype": "TCP", 00:16:29.307 "zcopy": false 00:16:29.307 } 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "method": "nvmf_create_subsystem", 00:16:29.307 "params": { 00:16:29.307 "allow_any_host": false, 00:16:29.307 "ana_reporting": false, 00:16:29.307 "max_cntlid": 65519, 00:16:29.307 "max_namespaces": 10, 00:16:29.307 "min_cntlid": 1, 00:16:29.307 "model_number": "SPDK bdev Controller", 00:16:29.307 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.307 "serial_number": "SPDK00000000000001" 00:16:29.307 } 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "method": 
"nvmf_subsystem_add_host", 00:16:29.307 "params": { 00:16:29.307 "host": "nqn.2016-06.io.spdk:host1", 00:16:29.307 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.307 "psk": "/tmp/tmp.GdqwpZOJPt" 00:16:29.307 } 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "method": "nvmf_subsystem_add_ns", 00:16:29.307 "params": { 00:16:29.307 "namespace": { 00:16:29.307 "bdev_name": "malloc0", 00:16:29.307 "nguid": "3A84251D2BBA460182379847B014948F", 00:16:29.307 "no_auto_visible": false, 00:16:29.307 "nsid": 1, 00:16:29.307 "uuid": "3a84251d-2bba-4601-8237-9847b014948f" 00:16:29.307 }, 00:16:29.307 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:29.307 } 00:16:29.307 }, 00:16:29.307 { 00:16:29.307 "method": "nvmf_subsystem_add_listener", 00:16:29.307 "params": { 00:16:29.307 "listen_address": { 00:16:29.307 "adrfam": "IPv4", 00:16:29.307 "traddr": "10.0.0.2", 00:16:29.307 "trsvcid": "4420", 00:16:29.307 "trtype": "TCP" 00:16:29.307 }, 00:16:29.307 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.307 "secure_channel": true 00:16:29.307 } 00:16:29.307 } 00:16:29.307 ] 00:16:29.307 } 00:16:29.307 ] 00:16:29.307 }' 00:16:29.307 22:11:16 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:29.871 22:11:16 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:16:29.871 "subsystems": [ 00:16:29.871 { 00:16:29.871 "subsystem": "keyring", 00:16:29.871 "config": [] 00:16:29.871 }, 00:16:29.871 { 00:16:29.871 "subsystem": "iobuf", 00:16:29.871 "config": [ 00:16:29.871 { 00:16:29.871 "method": "iobuf_set_options", 00:16:29.871 "params": { 00:16:29.871 "large_bufsize": 135168, 00:16:29.871 "large_pool_count": 1024, 00:16:29.871 "small_bufsize": 8192, 00:16:29.871 "small_pool_count": 8192 00:16:29.871 } 00:16:29.871 } 00:16:29.871 ] 00:16:29.871 }, 00:16:29.871 { 00:16:29.871 "subsystem": "sock", 00:16:29.871 "config": [ 00:16:29.871 { 00:16:29.871 "method": "sock_set_default_impl", 00:16:29.871 "params": { 00:16:29.871 "impl_name": "posix" 00:16:29.871 } 00:16:29.871 }, 00:16:29.871 { 00:16:29.871 "method": "sock_impl_set_options", 00:16:29.871 "params": { 00:16:29.871 "enable_ktls": false, 00:16:29.871 "enable_placement_id": 0, 00:16:29.871 "enable_quickack": false, 00:16:29.871 "enable_recv_pipe": true, 00:16:29.871 "enable_zerocopy_send_client": false, 00:16:29.871 "enable_zerocopy_send_server": true, 00:16:29.871 "impl_name": "ssl", 00:16:29.871 "recv_buf_size": 4096, 00:16:29.871 "send_buf_size": 4096, 00:16:29.871 "tls_version": 0, 00:16:29.871 "zerocopy_threshold": 0 00:16:29.871 } 00:16:29.871 }, 00:16:29.871 { 00:16:29.871 "method": "sock_impl_set_options", 00:16:29.871 "params": { 00:16:29.871 "enable_ktls": false, 00:16:29.871 "enable_placement_id": 0, 00:16:29.871 "enable_quickack": false, 00:16:29.871 "enable_recv_pipe": true, 00:16:29.871 "enable_zerocopy_send_client": false, 00:16:29.871 "enable_zerocopy_send_server": true, 00:16:29.871 "impl_name": "posix", 00:16:29.871 "recv_buf_size": 2097152, 00:16:29.871 "send_buf_size": 2097152, 00:16:29.871 "tls_version": 0, 00:16:29.871 "zerocopy_threshold": 0 00:16:29.871 } 00:16:29.871 } 00:16:29.871 ] 00:16:29.871 }, 00:16:29.871 { 00:16:29.871 "subsystem": "vmd", 00:16:29.871 "config": [] 00:16:29.871 }, 00:16:29.871 { 00:16:29.871 "subsystem": "accel", 00:16:29.871 "config": [ 00:16:29.871 { 00:16:29.871 "method": "accel_set_options", 00:16:29.871 "params": { 00:16:29.871 "buf_count": 2048, 00:16:29.871 "large_cache_size": 16, 00:16:29.871 "sequence_count": 2048, 00:16:29.871 
"small_cache_size": 128, 00:16:29.871 "task_count": 2048 00:16:29.872 } 00:16:29.872 } 00:16:29.872 ] 00:16:29.872 }, 00:16:29.872 { 00:16:29.872 "subsystem": "bdev", 00:16:29.872 "config": [ 00:16:29.872 { 00:16:29.872 "method": "bdev_set_options", 00:16:29.872 "params": { 00:16:29.872 "bdev_auto_examine": true, 00:16:29.872 "bdev_io_cache_size": 256, 00:16:29.872 "bdev_io_pool_size": 65535, 00:16:29.872 "iobuf_large_cache_size": 16, 00:16:29.872 "iobuf_small_cache_size": 128 00:16:29.872 } 00:16:29.872 }, 00:16:29.872 { 00:16:29.872 "method": "bdev_raid_set_options", 00:16:29.872 "params": { 00:16:29.872 "process_window_size_kb": 1024 00:16:29.872 } 00:16:29.872 }, 00:16:29.872 { 00:16:29.872 "method": "bdev_iscsi_set_options", 00:16:29.872 "params": { 00:16:29.872 "timeout_sec": 30 00:16:29.872 } 00:16:29.872 }, 00:16:29.872 { 00:16:29.872 "method": "bdev_nvme_set_options", 00:16:29.872 "params": { 00:16:29.872 "action_on_timeout": "none", 00:16:29.872 "allow_accel_sequence": false, 00:16:29.872 "arbitration_burst": 0, 00:16:29.872 "bdev_retry_count": 3, 00:16:29.872 "ctrlr_loss_timeout_sec": 0, 00:16:29.872 "delay_cmd_submit": true, 00:16:29.872 "dhchap_dhgroups": [ 00:16:29.872 "null", 00:16:29.872 "ffdhe2048", 00:16:29.872 "ffdhe3072", 00:16:29.872 "ffdhe4096", 00:16:29.872 "ffdhe6144", 00:16:29.872 "ffdhe8192" 00:16:29.872 ], 00:16:29.872 "dhchap_digests": [ 00:16:29.872 "sha256", 00:16:29.872 "sha384", 00:16:29.872 "sha512" 00:16:29.872 ], 00:16:29.872 "disable_auto_failback": false, 00:16:29.872 "fast_io_fail_timeout_sec": 0, 00:16:29.872 "generate_uuids": false, 00:16:29.872 "high_priority_weight": 0, 00:16:29.872 "io_path_stat": false, 00:16:29.872 "io_queue_requests": 512, 00:16:29.872 "keep_alive_timeout_ms": 10000, 00:16:29.872 "low_priority_weight": 0, 00:16:29.872 "medium_priority_weight": 0, 00:16:29.872 "nvme_adminq_poll_period_us": 10000, 00:16:29.872 "nvme_error_stat": false, 00:16:29.872 "nvme_ioq_poll_period_us": 0, 00:16:29.872 "rdma_cm_event_timeout_ms": 0, 00:16:29.872 "rdma_max_cq_size": 0, 00:16:29.872 "rdma_srq_size": 0, 00:16:29.872 "reconnect_delay_sec": 0, 00:16:29.872 "timeout_admin_us": 0, 00:16:29.872 "timeout_us": 0, 00:16:29.872 "transport_ack_timeout": 0, 00:16:29.872 "transport_retry_count": 4, 00:16:29.872 "transport_tos": 0 00:16:29.872 } 00:16:29.872 }, 00:16:29.872 { 00:16:29.872 "method": "bdev_nvme_attach_controller", 00:16:29.872 "params": { 00:16:29.872 "adrfam": "IPv4", 00:16:29.872 "ctrlr_loss_timeout_sec": 0, 00:16:29.872 "ddgst": false, 00:16:29.872 "fast_io_fail_timeout_sec": 0, 00:16:29.872 "hdgst": false, 00:16:29.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.872 "name": "TLSTEST", 00:16:29.872 "prchk_guard": false, 00:16:29.872 "prchk_reftag": false, 00:16:29.872 "psk": "/tmp/tmp.GdqwpZOJPt", 00:16:29.872 "reconnect_delay_sec": 0, 00:16:29.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.872 "traddr": "10.0.0.2", 00:16:29.872 "trsvcid": "4420", 00:16:29.872 "trtype": "TCP" 00:16:29.872 } 00:16:29.872 }, 00:16:29.872 { 00:16:29.872 "method": "bdev_nvme_set_hotplug", 00:16:29.872 "params": { 00:16:29.872 "enable": false, 00:16:29.872 "period_us": 100000 00:16:29.872 } 00:16:29.872 }, 00:16:29.872 { 00:16:29.872 "method": "bdev_wait_for_examine" 00:16:29.872 } 00:16:29.872 ] 00:16:29.872 }, 00:16:29.872 { 00:16:29.872 "subsystem": "nbd", 00:16:29.872 "config": [] 00:16:29.872 } 00:16:29.872 ] 00:16:29.872 }' 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84672 00:16:29.872 22:11:16 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84672 ']' 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84672 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84672 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:29.872 killing process with pid 84672 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84672' 00:16:29.872 Received shutdown signal, test time was about 10.000000 seconds 00:16:29.872 00:16:29.872 Latency(us) 00:16:29.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.872 =================================================================================================================== 00:16:29.872 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84672 00:16:29.872 [2024-07-15 22:11:16.571538] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84672 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84564 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84564 ']' 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84564 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84564 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:29.872 killing process with pid 84564 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84564' 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84564 00:16:29.872 [2024-07-15 22:11:16.754426] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:29.872 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84564 00:16:30.130 22:11:16 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:30.130 22:11:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.130 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.130 22:11:16 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:16:30.130 "subsystems": [ 00:16:30.130 { 00:16:30.130 "subsystem": "keyring", 00:16:30.130 "config": [] 00:16:30.130 }, 00:16:30.130 { 00:16:30.130 "subsystem": "iobuf", 00:16:30.130 "config": [ 00:16:30.130 { 00:16:30.130 "method": "iobuf_set_options", 00:16:30.130 "params": { 00:16:30.130 "large_bufsize": 135168, 
00:16:30.130 "large_pool_count": 1024, 00:16:30.130 "small_bufsize": 8192, 00:16:30.130 "small_pool_count": 8192 00:16:30.130 } 00:16:30.130 } 00:16:30.130 ] 00:16:30.130 }, 00:16:30.130 { 00:16:30.130 "subsystem": "sock", 00:16:30.130 "config": [ 00:16:30.130 { 00:16:30.130 "method": "sock_set_default_impl", 00:16:30.130 "params": { 00:16:30.130 "impl_name": "posix" 00:16:30.130 } 00:16:30.130 }, 00:16:30.130 { 00:16:30.130 "method": "sock_impl_set_options", 00:16:30.130 "params": { 00:16:30.130 "enable_ktls": false, 00:16:30.130 "enable_placement_id": 0, 00:16:30.130 "enable_quickack": false, 00:16:30.130 "enable_recv_pipe": true, 00:16:30.130 "enable_zerocopy_send_client": false, 00:16:30.130 "enable_zerocopy_send_server": true, 00:16:30.130 "impl_name": "ssl", 00:16:30.130 "recv_buf_size": 4096, 00:16:30.130 "send_buf_size": 4096, 00:16:30.130 "tls_version": 0, 00:16:30.130 "zerocopy_threshold": 0 00:16:30.130 } 00:16:30.130 }, 00:16:30.130 { 00:16:30.130 "method": "sock_impl_set_options", 00:16:30.130 "params": { 00:16:30.130 "enable_ktls": false, 00:16:30.130 "enable_placement_id": 0, 00:16:30.130 "enable_quickack": false, 00:16:30.130 "enable_recv_pipe": true, 00:16:30.130 "enable_zerocopy_send_client": false, 00:16:30.130 "enable_zerocopy_send_server": true, 00:16:30.130 "impl_name": "posix", 00:16:30.130 "recv_buf_size": 2097152, 00:16:30.130 "send_buf_size": 2097152, 00:16:30.130 "tls_version": 0, 00:16:30.130 "zerocopy_threshold": 0 00:16:30.130 } 00:16:30.130 } 00:16:30.130 ] 00:16:30.130 }, 00:16:30.130 { 00:16:30.130 "subsystem": "vmd", 00:16:30.130 "config": [] 00:16:30.130 }, 00:16:30.130 { 00:16:30.130 "subsystem": "accel", 00:16:30.130 "config": [ 00:16:30.130 { 00:16:30.130 "method": "accel_set_options", 00:16:30.130 "params": { 00:16:30.130 "buf_count": 2048, 00:16:30.130 "large_cache_size": 16, 00:16:30.130 "sequence_count": 2048, 00:16:30.130 "small_cache_size": 128, 00:16:30.130 "task_count": 2048 00:16:30.130 } 00:16:30.130 } 00:16:30.130 ] 00:16:30.130 }, 00:16:30.130 { 00:16:30.130 "subsystem": "bdev", 00:16:30.130 "config": [ 00:16:30.130 { 00:16:30.130 "method": "bdev_set_options", 00:16:30.130 "params": { 00:16:30.130 "bdev_auto_examine": true, 00:16:30.130 "bdev_io_cache_size": 256, 00:16:30.130 "bdev_io_pool_size": 65535, 00:16:30.130 "iobuf_large_cache_size": 16, 00:16:30.130 "iobuf_small_cache_size": 128 00:16:30.130 } 00:16:30.130 }, 00:16:30.130 { 00:16:30.130 "method": "bdev_raid_set_options", 00:16:30.130 "params": { 00:16:30.130 "process_window_size_kb": 1024 00:16:30.130 } 00:16:30.130 }, 00:16:30.130 { 00:16:30.130 "method": "bdev_iscsi_set_options", 00:16:30.130 "params": { 00:16:30.130 "timeout_sec": 30 00:16:30.130 } 00:16:30.130 }, 00:16:30.130 { 00:16:30.130 "method": "bdev_nvme_set_options", 00:16:30.130 "params": { 00:16:30.130 "action_on_timeout": "none", 00:16:30.130 "allow_accel_sequence": false, 00:16:30.130 "arbitration_burst": 0, 00:16:30.130 "bdev_retry_count": 3, 00:16:30.130 "ctrlr_loss_timeout_sec": 0, 00:16:30.130 "delay_cmd_submit": true, 00:16:30.130 "dhchap_dhgroups": [ 00:16:30.130 "null", 00:16:30.130 "ffdhe2048", 00:16:30.131 "ffdhe3072", 00:16:30.131 "ffdhe4096", 00:16:30.131 "ffdhe6144", 00:16:30.131 "ffdhe8192" 00:16:30.131 ], 00:16:30.131 "dhchap_digests": [ 00:16:30.131 "sha256", 00:16:30.131 "sha384", 00:16:30.131 "sha512" 00:16:30.131 ], 00:16:30.131 "disable_auto_failback": false, 00:16:30.131 "fast_io_fail_timeout_sec": 0, 00:16:30.131 "generate_uuids": false, 00:16:30.131 "high_priority_weight": 0, 00:16:30.131 
"io_path_stat": false, 00:16:30.131 "io_queue_requests": 0, 00:16:30.131 "keep_alive_timeout_ms": 10000, 00:16:30.131 "low_priority_weight": 0, 00:16:30.131 "medium_priority_weight": 0, 00:16:30.131 "nvme_adminq_poll_period_us": 10000, 00:16:30.131 "nvme_error_stat": false, 00:16:30.131 "nvme_ioq_poll_period_us": 0, 00:16:30.131 "rdma_cm_event_timeout_ms": 0, 00:16:30.131 "rdma_max_cq_size": 0, 00:16:30.131 "rdma_srq_size": 0, 00:16:30.131 "reconnect_delay_sec": 0, 00:16:30.131 "timeout_admin_us": 0, 00:16:30.131 "timeout_us": 0, 00:16:30.131 "transport_ack_timeout": 0, 00:16:30.131 "transport_retry_count": 4, 00:16:30.131 "transport_tos": 0 00:16:30.131 } 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "method": "bdev_nvme_set_hotplug", 00:16:30.131 "params": { 00:16:30.131 "enable": false, 00:16:30.131 "period_us": 100000 00:16:30.131 } 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "method": "bdev_malloc_create", 00:16:30.131 "params": { 00:16:30.131 "block_size": 4096, 00:16:30.131 "name": "malloc0", 00:16:30.131 "num_blocks": 8192, 00:16:30.131 "optimal_io_boundary": 0, 00:16:30.131 "physical_block_size": 4096, 00:16:30.131 "uuid": "3a84251d-2bba-4601-8237-9847b014948f" 00:16:30.131 } 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "method": "bdev_wait_for_examine" 00:16:30.131 } 00:16:30.131 ] 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "subsystem": "nbd", 00:16:30.131 "config": [] 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "subsystem": "scheduler", 00:16:30.131 "config": [ 00:16:30.131 { 00:16:30.131 "method": "framework_set_scheduler", 00:16:30.131 "params": { 00:16:30.131 "name": "static" 00:16:30.131 } 00:16:30.131 } 00:16:30.131 ] 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "subsystem": "nvmf", 00:16:30.131 "config": [ 00:16:30.131 { 00:16:30.131 "method": "nvmf_set_config", 00:16:30.131 "params": { 00:16:30.131 "admin_cmd_passthru": { 00:16:30.131 "identify_ctrlr": false 00:16:30.131 }, 00:16:30.131 "discovery_filter": "match_any" 00:16:30.131 } 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "method": "nvmf_set_max_subsystems", 00:16:30.131 "params": { 00:16:30.131 "max_subsystems": 1024 00:16:30.131 } 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "method": "nvmf_set_crdt", 00:16:30.131 "params": { 00:16:30.131 "crdt1": 0, 00:16:30.131 "crdt2": 0, 00:16:30.131 "crdt3": 0 00:16:30.131 } 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "method": "nvmf_create_transport", 00:16:30.131 "params": { 00:16:30.131 "abort_timeout_sec": 1, 00:16:30.131 "ack_timeout": 0, 00:16:30.131 "buf_cache_size": 4294967295, 00:16:30.131 "c2h_success": false, 00:16:30.131 "data_wr_pool_size": 0, 00:16:30.131 "dif_insert_or_strip": false, 00:16:30.131 "in_capsule_data_size": 4096, 00:16:30.131 "io_unit_size": 131072, 00:16:30.131 "max_aq_depth": 128, 00:16:30.131 "max_io_qpairs_per_ctrlr": 127, 00:16:30.131 "max_io_size": 131072, 00:16:30.131 "max_queue_depth": 128, 00:16:30.131 "num_shared_buffers": 511, 00:16:30.131 "sock_priority": 0, 00:16:30.131 "trtype": "TCP", 00:16:30.131 "zcopy": false 00:16:30.131 } 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "method": "nvmf_create_subsystem", 00:16:30.131 "params": { 00:16:30.131 "allow_any_host": false, 00:16:30.131 "ana_reporting": false, 00:16:30.131 "max_cntlid": 65519, 00:16:30.131 "max_namespaces": 10, 00:16:30.131 "min_cntlid": 1, 00:16:30.131 "model_number": "SPDK bdev Controller", 00:16:30.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.131 "serial_number": "SPDK00000000000001" 00:16:30.131 } 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "method": 
"nvmf_subsystem_add_host", 00:16:30.131 "params": { 00:16:30.131 "host": "nqn.2016-06.io.spdk:host1", 00:16:30.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.131 "psk": "/tmp/tmp.GdqwpZOJPt" 00:16:30.131 } 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "method": "nvmf_subsystem_add_ns", 00:16:30.131 "params": { 00:16:30.131 "namespace": { 00:16:30.131 "bdev_name": "malloc0", 00:16:30.131 "nguid": "3A84251D2BBA460182379847B014948F", 00:16:30.131 "no_auto_visible": false, 00:16:30.131 "nsid": 1, 00:16:30.131 "uuid": "3a84251d-2bba-4601-8237-9847b014948f" 00:16:30.131 }, 00:16:30.131 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:30.131 } 00:16:30.131 }, 00:16:30.131 { 00:16:30.131 "method": "nvmf_subsystem_add_listener", 00:16:30.131 "params": { 00:16:30.131 "listen_address": { 00:16:30.131 "adrfam": "IPv4", 00:16:30.131 "traddr": "10.0.0.2", 00:16:30.131 "trsvcid": "4420", 00:16:30.131 "trtype": "TCP" 00:16:30.131 }, 00:16:30.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:30.131 "secure_channel": true 00:16:30.131 } 00:16:30.131 } 00:16:30.131 ] 00:16:30.131 } 00:16:30.131 ] 00:16:30.131 }' 00:16:30.131 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.131 22:11:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84738 00:16:30.131 22:11:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:30.131 22:11:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84738 00:16:30.131 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84738 ']' 00:16:30.131 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.131 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.131 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.131 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.131 22:11:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.131 [2024-07-15 22:11:17.007883] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:30.131 [2024-07-15 22:11:17.008036] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.388 [2024-07-15 22:11:17.164969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.388 [2024-07-15 22:11:17.253883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.388 [2024-07-15 22:11:17.253966] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.388 [2024-07-15 22:11:17.253985] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.388 [2024-07-15 22:11:17.253999] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.388 [2024-07-15 22:11:17.254011] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:30.388 [2024-07-15 22:11:17.254142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.648 [2024-07-15 22:11:17.439912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.648 [2024-07-15 22:11:17.455850] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:30.648 [2024-07-15 22:11:17.471834] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:30.648 [2024-07-15 22:11:17.472040] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.215 22:11:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.215 22:11:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:31.216 22:11:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.216 22:11:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:31.216 22:11:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.216 22:11:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.216 22:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84782 00:16:31.216 22:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84782 /var/tmp/bdevperf.sock 00:16:31.216 22:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84782 ']' 00:16:31.216 22:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.216 22:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.216 22:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
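The stretch above is the configuration round-trip part of the test: target/tls.sh@196-197 dumped the live JSON configuration of both the target and bdevperf with save_config, both processes were killed, and a fresh nvmf_tgt was started from that JSON via -c /dev/fd/62, so the TLS listener, the malloc0 namespace and the PSK-protected host entry all have to come back without any further RPC calls. The bdevperf instance launched next gets its own saved configuration the same way on /dev/fd/63. In outline (a sketch only: the script feeds the JSON through process substitution from shell variables, runs the target inside the nvmf_tgt_ns_spdk network namespace, and the tgt.json/bdevperf.json file names here are illustrative):

    # Snapshot both running processes ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt.json
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf.json

    # ... then bring them back purely from JSON, with no rpc.py setup calls this time.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c tgt.json &
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c bdevperf.json &

    # The 10-second verify pass itself is driven over the bdevperf RPC socket.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
        -s /var/tmp/bdevperf.sock perform_tests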
00:16:31.216 22:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:31.216 22:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.216 22:11:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.216 22:11:18 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:16:31.216 "subsystems": [ 00:16:31.216 { 00:16:31.216 "subsystem": "keyring", 00:16:31.216 "config": [] 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "subsystem": "iobuf", 00:16:31.216 "config": [ 00:16:31.216 { 00:16:31.216 "method": "iobuf_set_options", 00:16:31.216 "params": { 00:16:31.216 "large_bufsize": 135168, 00:16:31.216 "large_pool_count": 1024, 00:16:31.216 "small_bufsize": 8192, 00:16:31.216 "small_pool_count": 8192 00:16:31.216 } 00:16:31.216 } 00:16:31.216 ] 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "subsystem": "sock", 00:16:31.216 "config": [ 00:16:31.216 { 00:16:31.216 "method": "sock_set_default_impl", 00:16:31.216 "params": { 00:16:31.216 "impl_name": "posix" 00:16:31.216 } 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "method": "sock_impl_set_options", 00:16:31.216 "params": { 00:16:31.216 "enable_ktls": false, 00:16:31.216 "enable_placement_id": 0, 00:16:31.216 "enable_quickack": false, 00:16:31.216 "enable_recv_pipe": true, 00:16:31.216 "enable_zerocopy_send_client": false, 00:16:31.216 "enable_zerocopy_send_server": true, 00:16:31.216 "impl_name": "ssl", 00:16:31.216 "recv_buf_size": 4096, 00:16:31.216 "send_buf_size": 4096, 00:16:31.216 "tls_version": 0, 00:16:31.216 "zerocopy_threshold": 0 00:16:31.216 } 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "method": "sock_impl_set_options", 00:16:31.216 "params": { 00:16:31.216 "enable_ktls": false, 00:16:31.216 "enable_placement_id": 0, 00:16:31.216 "enable_quickack": false, 00:16:31.216 "enable_recv_pipe": true, 00:16:31.216 "enable_zerocopy_send_client": false, 00:16:31.216 "enable_zerocopy_send_server": true, 00:16:31.216 "impl_name": "posix", 00:16:31.216 "recv_buf_size": 2097152, 00:16:31.216 "send_buf_size": 2097152, 00:16:31.216 "tls_version": 0, 00:16:31.216 "zerocopy_threshold": 0 00:16:31.216 } 00:16:31.216 } 00:16:31.216 ] 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "subsystem": "vmd", 00:16:31.216 "config": [] 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "subsystem": "accel", 00:16:31.216 "config": [ 00:16:31.216 { 00:16:31.216 "method": "accel_set_options", 00:16:31.216 "params": { 00:16:31.216 "buf_count": 2048, 00:16:31.216 "large_cache_size": 16, 00:16:31.216 "sequence_count": 2048, 00:16:31.216 "small_cache_size": 128, 00:16:31.216 "task_count": 2048 00:16:31.216 } 00:16:31.216 } 00:16:31.216 ] 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "subsystem": "bdev", 00:16:31.216 "config": [ 00:16:31.216 { 00:16:31.216 "method": "bdev_set_options", 00:16:31.216 "params": { 00:16:31.216 "bdev_auto_examine": true, 00:16:31.216 "bdev_io_cache_size": 256, 00:16:31.216 "bdev_io_pool_size": 65535, 00:16:31.216 "iobuf_large_cache_size": 16, 00:16:31.216 "iobuf_small_cache_size": 128 00:16:31.216 } 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "method": "bdev_raid_set_options", 00:16:31.216 "params": { 00:16:31.216 "process_window_size_kb": 1024 00:16:31.216 } 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "method": "bdev_iscsi_set_options", 00:16:31.216 "params": { 00:16:31.216 "timeout_sec": 30 00:16:31.216 } 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "method": 
"bdev_nvme_set_options", 00:16:31.216 "params": { 00:16:31.216 "action_on_timeout": "none", 00:16:31.216 "allow_accel_sequence": false, 00:16:31.216 "arbitration_burst": 0, 00:16:31.216 "bdev_retry_count": 3, 00:16:31.216 "ctrlr_loss_timeout_sec": 0, 00:16:31.216 "delay_cmd_submit": true, 00:16:31.216 "dhchap_dhgroups": [ 00:16:31.216 "null", 00:16:31.216 "ffdhe2048", 00:16:31.216 "ffdhe3072", 00:16:31.216 "ffdhe4096", 00:16:31.216 "ffdhe6144", 00:16:31.216 "ffdhe8192" 00:16:31.216 ], 00:16:31.216 "dhchap_digests": [ 00:16:31.216 "sha256", 00:16:31.216 "sha384", 00:16:31.216 "sha512" 00:16:31.216 ], 00:16:31.216 "disable_auto_failback": false, 00:16:31.216 "fast_io_fail_timeout_sec": 0, 00:16:31.216 "generate_uuids": false, 00:16:31.216 "high_priority_weight": 0, 00:16:31.216 "io_path_stat": false, 00:16:31.216 "io_queue_requests": 512, 00:16:31.216 "keep_alive_timeout_ms": 10000, 00:16:31.216 "low_priority_weight": 0, 00:16:31.216 "medium_priority_weight": 0, 00:16:31.216 "nvme_adminq_poll_period_us": 10000, 00:16:31.216 "nvme_error_stat": false, 00:16:31.216 "nvme_ioq_poll_period_us": 0, 00:16:31.216 "rdma_cm_event_timeout_ms": 0, 00:16:31.216 "rdma_max_cq_size": 0, 00:16:31.216 "rdma_srq_size": 0, 00:16:31.216 "reconnect_delay_sec": 0, 00:16:31.216 "timeout_admin_us": 0, 00:16:31.216 "timeout_us": 0, 00:16:31.216 "transport_ack_timeout": 0, 00:16:31.216 "transport_retry_count": 4, 00:16:31.216 "transport_tos": 0 00:16:31.216 } 00:16:31.216 }, 00:16:31.216 { 00:16:31.216 "method": "bdev_nvme_attach_controller", 00:16:31.216 "params": { 00:16:31.216 "adrfam": "IPv4", 00:16:31.217 "ctrlr_loss_timeout_sec": 0, 00:16:31.217 "ddgst": false, 00:16:31.217 "fast_io_fail_timeout_sec": 0, 00:16:31.217 "hdgst": false, 00:16:31.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:31.217 "name": "TLSTEST", 00:16:31.217 "prchk_guard": false, 00:16:31.217 "prchk_reftag": false, 00:16:31.217 "psk": "/tmp/tmp.GdqwpZOJPt", 00:16:31.217 "reconnect_delay_sec": 0, 00:16:31.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.217 "traddr": "10.0.0.2", 00:16:31.217 "trsvcid": "4420", 00:16:31.217 "trtype": "TCP" 00:16:31.217 } 00:16:31.217 }, 00:16:31.217 { 00:16:31.217 "method": "bdev_nvme_set_hotplug", 00:16:31.217 "params": { 00:16:31.217 "enable": false, 00:16:31.217 "period_us": 100000 00:16:31.217 } 00:16:31.217 }, 00:16:31.217 { 00:16:31.217 "method": "bdev_wait_for_examine" 00:16:31.217 } 00:16:31.217 ] 00:16:31.217 }, 00:16:31.217 { 00:16:31.217 "subsystem": "nbd", 00:16:31.217 "config": [] 00:16:31.217 } 00:16:31.217 ] 00:16:31.217 }' 00:16:31.217 [2024-07-15 22:11:18.081317] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:16:31.217 [2024-07-15 22:11:18.081433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84782 ] 00:16:31.475 [2024-07-15 22:11:18.225420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.475 [2024-07-15 22:11:18.300053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.733 [2024-07-15 22:11:18.431728] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:31.733 [2024-07-15 22:11:18.431860] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:32.299 22:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:32.299 22:11:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:32.299 22:11:19 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:32.299 Running I/O for 10 seconds... 00:16:42.262 00:16:42.262 Latency(us) 00:16:42.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.262 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:42.262 Verification LBA range: start 0x0 length 0x2000 00:16:42.262 TLSTESTn1 : 10.01 3487.64 13.62 0.00 0.00 36638.70 4915.20 38606.66 00:16:42.262 =================================================================================================================== 00:16:42.262 Total : 3487.64 13.62 0.00 0.00 36638.70 4915.20 38606.66 00:16:42.262 0 00:16:42.520 22:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.520 22:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84782 00:16:42.520 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84782 ']' 00:16:42.520 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84782 00:16:42.520 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:42.520 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:42.520 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84782 00:16:42.520 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:42.520 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:42.520 killing process with pid 84782 00:16:42.520 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84782' 00:16:42.520 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84782 00:16:42.520 Received shutdown signal, test time was about 10.000000 seconds 00:16:42.520 00:16:42.520 Latency(us) 00:16:42.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.520 =================================================================================================================== 00:16:42.521 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.521 [2024-07-15 22:11:29.235767] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:42.521 22:11:29 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84782 00:16:42.521 22:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84738 00:16:42.521 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84738 ']' 00:16:42.521 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84738 00:16:42.521 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:42.521 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:42.521 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84738 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:42.779 killing process with pid 84738 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84738' 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84738 00:16:42.779 [2024-07-15 22:11:29.479602] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84738 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84937 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84937 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84937 ']' 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.779 22:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:43.037 [2024-07-15 22:11:29.779988] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:43.037 [2024-07-15 22:11:29.780149] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.037 [2024-07-15 22:11:29.926205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.296 [2024-07-15 22:11:30.002855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:43.296 [2024-07-15 22:11:30.002948] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.296 [2024-07-15 22:11:30.002969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.296 [2024-07-15 22:11:30.002985] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.296 [2024-07-15 22:11:30.002998] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.296 [2024-07-15 22:11:30.003040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.230 22:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.230 22:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:44.230 22:11:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:44.230 22:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.230 22:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.230 22:11:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.230 22:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.GdqwpZOJPt 00:16:44.230 22:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GdqwpZOJPt 00:16:44.230 22:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:44.488 [2024-07-15 22:11:31.222188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.488 22:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:44.746 22:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:45.004 [2024-07-15 22:11:31.762393] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:45.004 [2024-07-15 22:11:31.762898] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.004 22:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:45.262 malloc0 00:16:45.262 22:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:45.520 22:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GdqwpZOJPt 00:16:45.778 [2024-07-15 22:11:32.619189] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:45.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
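The target side of this sub-test repeats the same setup_nvmf_tgt sequence as before and ends in the deprecation warning above for the file-path form of the PSK. What changes is the initiator: instead of handing bdev_nvme_attach_controller a raw path, the key file is first registered with the keyring (target/tls.sh@227) and the controller then references it by name (target/tls.sh@228). Condensed, using the socket, addresses and key name from this run:

    # Register the PSK interchange file as a named key on the bdevperf instance ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.GdqwpZOJPt

    # ... and reference it by name when creating the TLS-secured NVMe/TCP controller.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The short 1-second verify pass that follows (nvme0n1 in the results table below) confirms I/O still flows over the TLS connection when the credential comes from the keyring rather than the deprecated PSK path.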
00:16:45.778 22:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85041 00:16:45.778 22:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:45.778 22:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:45.778 22:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85041 /var/tmp/bdevperf.sock 00:16:45.778 22:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85041 ']' 00:16:45.778 22:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:45.778 22:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.778 22:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:45.778 22:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.778 22:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.778 [2024-07-15 22:11:32.693368] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:45.778 [2024-07-15 22:11:32.693471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85041 ] 00:16:46.036 [2024-07-15 22:11:32.839348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.036 [2024-07-15 22:11:32.928650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.294 22:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.294 22:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:46.294 22:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GdqwpZOJPt 00:16:46.604 22:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:46.862 [2024-07-15 22:11:33.653124] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:46.862 nvme0n1 00:16:46.862 22:11:33 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:47.120 Running I/O for 1 seconds... 
00:16:48.053 00:16:48.053 Latency(us) 00:16:48.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.053 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:48.053 Verification LBA range: start 0x0 length 0x2000 00:16:48.053 nvme0n1 : 1.02 3139.15 12.26 0.00 0.00 40452.02 7387.69 34793.66 00:16:48.053 =================================================================================================================== 00:16:48.053 Total : 3139.15 12.26 0.00 0.00 40452.02 7387.69 34793.66 00:16:48.053 0 00:16:48.053 22:11:34 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85041 00:16:48.053 22:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85041 ']' 00:16:48.053 22:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85041 00:16:48.053 22:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:48.053 22:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:48.053 22:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85041 00:16:48.053 killing process with pid 85041 00:16:48.053 Received shutdown signal, test time was about 1.000000 seconds 00:16:48.053 00:16:48.053 Latency(us) 00:16:48.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.053 =================================================================================================================== 00:16:48.053 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:48.053 22:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:48.053 22:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:48.053 22:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85041' 00:16:48.053 22:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85041 00:16:48.053 22:11:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85041 00:16:48.311 22:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 84937 00:16:48.311 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84937 ']' 00:16:48.311 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84937 00:16:48.311 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:48.311 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:48.311 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84937 00:16:48.311 killing process with pid 84937 00:16:48.311 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:48.311 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:48.311 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84937' 00:16:48.311 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84937 00:16:48.311 [2024-07-15 22:11:35.128965] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:48.311 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84937 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85097 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85097 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85097 ']' 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.569 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.569 [2024-07-15 22:11:35.362711] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:48.569 [2024-07-15 22:11:35.362807] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.569 [2024-07-15 22:11:35.496535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.827 [2024-07-15 22:11:35.574810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.827 [2024-07-15 22:11:35.574877] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.827 [2024-07-15 22:11:35.574896] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.827 [2024-07-15 22:11:35.574910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.827 [2024-07-15 22:11:35.574922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
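Here the per-run bdevperf (pid 85041) and the previous target (pid 84937) are torn down, and nvmfappstart brings up a fresh nvmf_tgt inside the nvmf_tgt_ns_spdk network namespace with every tracepoint group enabled (-e 0xFFFF), which is what produces the spdk_trace notices above. The essence of that step, again with the wait-for-listen loop paraphrased as a simple RPC poll rather than copied from the harness:

  # relaunch the target inside the dedicated namespace, exactly as traced above
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # block until the default RPC socket answers before sending any configuration
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
      >/dev/null 2>&1; do sleep 0.1; done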
00:16:48.827 [2024-07-15 22:11:35.574960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.827 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.827 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:48.827 22:11:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:48.827 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:48.827 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.827 22:11:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.827 22:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:16:48.827 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.827 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.827 [2024-07-15 22:11:35.710619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.827 malloc0 00:16:48.827 [2024-07-15 22:11:35.741348] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:48.827 [2024-07-15 22:11:35.741547] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.827 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.086 22:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=85138 00:16:49.086 22:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:49.086 22:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 85138 /var/tmp/bdevperf.sock 00:16:49.086 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85138 ']' 00:16:49.086 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:49.086 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.086 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:49.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:49.086 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.086 22:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.086 [2024-07-15 22:11:35.837741] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:16:49.086 [2024-07-15 22:11:35.837858] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85138 ] 00:16:49.086 [2024-07-15 22:11:35.978431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.344 [2024-07-15 22:11:36.037967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.344 22:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.344 22:11:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:49.344 22:11:36 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GdqwpZOJPt 00:16:49.601 22:11:36 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:49.859 [2024-07-15 22:11:36.620496] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:49.859 nvme0n1 00:16:49.859 22:11:36 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:50.116 Running I/O for 1 seconds... 00:16:51.050 00:16:51.050 Latency(us) 00:16:51.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.050 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:51.050 Verification LBA range: start 0x0 length 0x2000 00:16:51.050 nvme0n1 : 1.02 3877.30 15.15 0.00 0.00 32662.66 7328.12 26333.56 00:16:51.050 =================================================================================================================== 00:16:51.050 Total : 3877.30 15.15 0.00 0.00 32662.66 7328.12 26333.56 00:16:51.050 0 00:16:51.050 22:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:16:51.050 22:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.050 22:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.050 22:11:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.050 22:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:16:51.050 "subsystems": [ 00:16:51.050 { 00:16:51.050 "subsystem": "keyring", 00:16:51.050 "config": [ 00:16:51.050 { 00:16:51.050 "method": "keyring_file_add_key", 00:16:51.050 "params": { 00:16:51.050 "name": "key0", 00:16:51.050 "path": "/tmp/tmp.GdqwpZOJPt" 00:16:51.050 } 00:16:51.050 } 00:16:51.050 ] 00:16:51.050 }, 00:16:51.050 { 00:16:51.050 "subsystem": "iobuf", 00:16:51.050 "config": [ 00:16:51.050 { 00:16:51.050 "method": "iobuf_set_options", 00:16:51.050 "params": { 00:16:51.050 "large_bufsize": 135168, 00:16:51.050 "large_pool_count": 1024, 00:16:51.050 "small_bufsize": 8192, 00:16:51.050 "small_pool_count": 8192 00:16:51.050 } 00:16:51.050 } 00:16:51.050 ] 00:16:51.050 }, 00:16:51.050 { 00:16:51.050 "subsystem": "sock", 00:16:51.050 "config": [ 00:16:51.050 { 00:16:51.050 "method": "sock_set_default_impl", 00:16:51.050 "params": { 00:16:51.050 "impl_name": "posix" 00:16:51.050 } 00:16:51.050 }, 00:16:51.050 { 00:16:51.050 "method": "sock_impl_set_options", 00:16:51.050 "params": { 00:16:51.050 
"enable_ktls": false, 00:16:51.050 "enable_placement_id": 0, 00:16:51.050 "enable_quickack": false, 00:16:51.050 "enable_recv_pipe": true, 00:16:51.050 "enable_zerocopy_send_client": false, 00:16:51.050 "enable_zerocopy_send_server": true, 00:16:51.050 "impl_name": "ssl", 00:16:51.050 "recv_buf_size": 4096, 00:16:51.050 "send_buf_size": 4096, 00:16:51.050 "tls_version": 0, 00:16:51.050 "zerocopy_threshold": 0 00:16:51.050 } 00:16:51.050 }, 00:16:51.050 { 00:16:51.050 "method": "sock_impl_set_options", 00:16:51.050 "params": { 00:16:51.050 "enable_ktls": false, 00:16:51.050 "enable_placement_id": 0, 00:16:51.050 "enable_quickack": false, 00:16:51.050 "enable_recv_pipe": true, 00:16:51.050 "enable_zerocopy_send_client": false, 00:16:51.050 "enable_zerocopy_send_server": true, 00:16:51.050 "impl_name": "posix", 00:16:51.050 "recv_buf_size": 2097152, 00:16:51.050 "send_buf_size": 2097152, 00:16:51.050 "tls_version": 0, 00:16:51.050 "zerocopy_threshold": 0 00:16:51.050 } 00:16:51.050 } 00:16:51.050 ] 00:16:51.050 }, 00:16:51.050 { 00:16:51.050 "subsystem": "vmd", 00:16:51.050 "config": [] 00:16:51.050 }, 00:16:51.050 { 00:16:51.050 "subsystem": "accel", 00:16:51.050 "config": [ 00:16:51.050 { 00:16:51.050 "method": "accel_set_options", 00:16:51.050 "params": { 00:16:51.050 "buf_count": 2048, 00:16:51.050 "large_cache_size": 16, 00:16:51.050 "sequence_count": 2048, 00:16:51.050 "small_cache_size": 128, 00:16:51.050 "task_count": 2048 00:16:51.050 } 00:16:51.050 } 00:16:51.050 ] 00:16:51.050 }, 00:16:51.050 { 00:16:51.050 "subsystem": "bdev", 00:16:51.050 "config": [ 00:16:51.050 { 00:16:51.050 "method": "bdev_set_options", 00:16:51.050 "params": { 00:16:51.050 "bdev_auto_examine": true, 00:16:51.050 "bdev_io_cache_size": 256, 00:16:51.050 "bdev_io_pool_size": 65535, 00:16:51.050 "iobuf_large_cache_size": 16, 00:16:51.050 "iobuf_small_cache_size": 128 00:16:51.050 } 00:16:51.050 }, 00:16:51.050 { 00:16:51.050 "method": "bdev_raid_set_options", 00:16:51.050 "params": { 00:16:51.050 "process_window_size_kb": 1024 00:16:51.050 } 00:16:51.050 }, 00:16:51.050 { 00:16:51.050 "method": "bdev_iscsi_set_options", 00:16:51.050 "params": { 00:16:51.050 "timeout_sec": 30 00:16:51.050 } 00:16:51.050 }, 00:16:51.050 { 00:16:51.050 "method": "bdev_nvme_set_options", 00:16:51.050 "params": { 00:16:51.050 "action_on_timeout": "none", 00:16:51.050 "allow_accel_sequence": false, 00:16:51.050 "arbitration_burst": 0, 00:16:51.051 "bdev_retry_count": 3, 00:16:51.051 "ctrlr_loss_timeout_sec": 0, 00:16:51.051 "delay_cmd_submit": true, 00:16:51.051 "dhchap_dhgroups": [ 00:16:51.051 "null", 00:16:51.051 "ffdhe2048", 00:16:51.051 "ffdhe3072", 00:16:51.051 "ffdhe4096", 00:16:51.051 "ffdhe6144", 00:16:51.051 "ffdhe8192" 00:16:51.051 ], 00:16:51.051 "dhchap_digests": [ 00:16:51.051 "sha256", 00:16:51.051 "sha384", 00:16:51.051 "sha512" 00:16:51.051 ], 00:16:51.051 "disable_auto_failback": false, 00:16:51.051 "fast_io_fail_timeout_sec": 0, 00:16:51.051 "generate_uuids": false, 00:16:51.051 "high_priority_weight": 0, 00:16:51.051 "io_path_stat": false, 00:16:51.051 "io_queue_requests": 0, 00:16:51.051 "keep_alive_timeout_ms": 10000, 00:16:51.051 "low_priority_weight": 0, 00:16:51.051 "medium_priority_weight": 0, 00:16:51.051 "nvme_adminq_poll_period_us": 10000, 00:16:51.051 "nvme_error_stat": false, 00:16:51.051 "nvme_ioq_poll_period_us": 0, 00:16:51.051 "rdma_cm_event_timeout_ms": 0, 00:16:51.051 "rdma_max_cq_size": 0, 00:16:51.051 "rdma_srq_size": 0, 00:16:51.051 "reconnect_delay_sec": 0, 00:16:51.051 "timeout_admin_us": 0, 
00:16:51.051 "timeout_us": 0, 00:16:51.051 "transport_ack_timeout": 0, 00:16:51.051 "transport_retry_count": 4, 00:16:51.051 "transport_tos": 0 00:16:51.051 } 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "method": "bdev_nvme_set_hotplug", 00:16:51.051 "params": { 00:16:51.051 "enable": false, 00:16:51.051 "period_us": 100000 00:16:51.051 } 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "method": "bdev_malloc_create", 00:16:51.051 "params": { 00:16:51.051 "block_size": 4096, 00:16:51.051 "name": "malloc0", 00:16:51.051 "num_blocks": 8192, 00:16:51.051 "optimal_io_boundary": 0, 00:16:51.051 "physical_block_size": 4096, 00:16:51.051 "uuid": "1b9affe7-9ba4-4755-9ae5-12ea688741cf" 00:16:51.051 } 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "method": "bdev_wait_for_examine" 00:16:51.051 } 00:16:51.051 ] 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "subsystem": "nbd", 00:16:51.051 "config": [] 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "subsystem": "scheduler", 00:16:51.051 "config": [ 00:16:51.051 { 00:16:51.051 "method": "framework_set_scheduler", 00:16:51.051 "params": { 00:16:51.051 "name": "static" 00:16:51.051 } 00:16:51.051 } 00:16:51.051 ] 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "subsystem": "nvmf", 00:16:51.051 "config": [ 00:16:51.051 { 00:16:51.051 "method": "nvmf_set_config", 00:16:51.051 "params": { 00:16:51.051 "admin_cmd_passthru": { 00:16:51.051 "identify_ctrlr": false 00:16:51.051 }, 00:16:51.051 "discovery_filter": "match_any" 00:16:51.051 } 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "method": "nvmf_set_max_subsystems", 00:16:51.051 "params": { 00:16:51.051 "max_subsystems": 1024 00:16:51.051 } 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "method": "nvmf_set_crdt", 00:16:51.051 "params": { 00:16:51.051 "crdt1": 0, 00:16:51.051 "crdt2": 0, 00:16:51.051 "crdt3": 0 00:16:51.051 } 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "method": "nvmf_create_transport", 00:16:51.051 "params": { 00:16:51.051 "abort_timeout_sec": 1, 00:16:51.051 "ack_timeout": 0, 00:16:51.051 "buf_cache_size": 4294967295, 00:16:51.051 "c2h_success": false, 00:16:51.051 "data_wr_pool_size": 0, 00:16:51.051 "dif_insert_or_strip": false, 00:16:51.051 "in_capsule_data_size": 4096, 00:16:51.051 "io_unit_size": 131072, 00:16:51.051 "max_aq_depth": 128, 00:16:51.051 "max_io_qpairs_per_ctrlr": 127, 00:16:51.051 "max_io_size": 131072, 00:16:51.051 "max_queue_depth": 128, 00:16:51.051 "num_shared_buffers": 511, 00:16:51.051 "sock_priority": 0, 00:16:51.051 "trtype": "TCP", 00:16:51.051 "zcopy": false 00:16:51.051 } 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "method": "nvmf_create_subsystem", 00:16:51.051 "params": { 00:16:51.051 "allow_any_host": false, 00:16:51.051 "ana_reporting": false, 00:16:51.051 "max_cntlid": 65519, 00:16:51.051 "max_namespaces": 32, 00:16:51.051 "min_cntlid": 1, 00:16:51.051 "model_number": "SPDK bdev Controller", 00:16:51.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.051 "serial_number": "00000000000000000000" 00:16:51.051 } 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "method": "nvmf_subsystem_add_host", 00:16:51.051 "params": { 00:16:51.051 "host": "nqn.2016-06.io.spdk:host1", 00:16:51.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.051 "psk": "key0" 00:16:51.051 } 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "method": "nvmf_subsystem_add_ns", 00:16:51.051 "params": { 00:16:51.051 "namespace": { 00:16:51.051 "bdev_name": "malloc0", 00:16:51.051 "nguid": "1B9AFFE79BA447559AE512EA688741CF", 00:16:51.051 "no_auto_visible": false, 00:16:51.051 "nsid": 1, 00:16:51.051 "uuid": 
"1b9affe7-9ba4-4755-9ae5-12ea688741cf" 00:16:51.051 }, 00:16:51.051 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:51.051 } 00:16:51.051 }, 00:16:51.051 { 00:16:51.051 "method": "nvmf_subsystem_add_listener", 00:16:51.051 "params": { 00:16:51.051 "listen_address": { 00:16:51.051 "adrfam": "IPv4", 00:16:51.051 "traddr": "10.0.0.2", 00:16:51.051 "trsvcid": "4420", 00:16:51.051 "trtype": "TCP" 00:16:51.051 }, 00:16:51.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.051 "secure_channel": false, 00:16:51.051 "sock_impl": "ssl" 00:16:51.051 } 00:16:51.051 } 00:16:51.051 ] 00:16:51.051 } 00:16:51.051 ] 00:16:51.051 }' 00:16:51.051 22:11:37 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:51.618 22:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:16:51.618 "subsystems": [ 00:16:51.618 { 00:16:51.618 "subsystem": "keyring", 00:16:51.618 "config": [ 00:16:51.618 { 00:16:51.618 "method": "keyring_file_add_key", 00:16:51.618 "params": { 00:16:51.618 "name": "key0", 00:16:51.618 "path": "/tmp/tmp.GdqwpZOJPt" 00:16:51.618 } 00:16:51.618 } 00:16:51.618 ] 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "subsystem": "iobuf", 00:16:51.618 "config": [ 00:16:51.618 { 00:16:51.618 "method": "iobuf_set_options", 00:16:51.618 "params": { 00:16:51.618 "large_bufsize": 135168, 00:16:51.618 "large_pool_count": 1024, 00:16:51.618 "small_bufsize": 8192, 00:16:51.618 "small_pool_count": 8192 00:16:51.618 } 00:16:51.618 } 00:16:51.618 ] 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "subsystem": "sock", 00:16:51.618 "config": [ 00:16:51.618 { 00:16:51.618 "method": "sock_set_default_impl", 00:16:51.618 "params": { 00:16:51.618 "impl_name": "posix" 00:16:51.618 } 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "method": "sock_impl_set_options", 00:16:51.618 "params": { 00:16:51.618 "enable_ktls": false, 00:16:51.618 "enable_placement_id": 0, 00:16:51.618 "enable_quickack": false, 00:16:51.618 "enable_recv_pipe": true, 00:16:51.618 "enable_zerocopy_send_client": false, 00:16:51.618 "enable_zerocopy_send_server": true, 00:16:51.618 "impl_name": "ssl", 00:16:51.618 "recv_buf_size": 4096, 00:16:51.618 "send_buf_size": 4096, 00:16:51.618 "tls_version": 0, 00:16:51.618 "zerocopy_threshold": 0 00:16:51.618 } 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "method": "sock_impl_set_options", 00:16:51.618 "params": { 00:16:51.618 "enable_ktls": false, 00:16:51.618 "enable_placement_id": 0, 00:16:51.618 "enable_quickack": false, 00:16:51.618 "enable_recv_pipe": true, 00:16:51.618 "enable_zerocopy_send_client": false, 00:16:51.618 "enable_zerocopy_send_server": true, 00:16:51.618 "impl_name": "posix", 00:16:51.618 "recv_buf_size": 2097152, 00:16:51.618 "send_buf_size": 2097152, 00:16:51.618 "tls_version": 0, 00:16:51.618 "zerocopy_threshold": 0 00:16:51.618 } 00:16:51.618 } 00:16:51.618 ] 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "subsystem": "vmd", 00:16:51.618 "config": [] 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "subsystem": "accel", 00:16:51.618 "config": [ 00:16:51.618 { 00:16:51.618 "method": "accel_set_options", 00:16:51.618 "params": { 00:16:51.618 "buf_count": 2048, 00:16:51.618 "large_cache_size": 16, 00:16:51.618 "sequence_count": 2048, 00:16:51.618 "small_cache_size": 128, 00:16:51.618 "task_count": 2048 00:16:51.618 } 00:16:51.618 } 00:16:51.618 ] 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "subsystem": "bdev", 00:16:51.618 "config": [ 00:16:51.618 { 00:16:51.618 "method": "bdev_set_options", 00:16:51.618 "params": { 00:16:51.618 
"bdev_auto_examine": true, 00:16:51.618 "bdev_io_cache_size": 256, 00:16:51.618 "bdev_io_pool_size": 65535, 00:16:51.618 "iobuf_large_cache_size": 16, 00:16:51.618 "iobuf_small_cache_size": 128 00:16:51.618 } 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "method": "bdev_raid_set_options", 00:16:51.618 "params": { 00:16:51.618 "process_window_size_kb": 1024 00:16:51.618 } 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "method": "bdev_iscsi_set_options", 00:16:51.618 "params": { 00:16:51.618 "timeout_sec": 30 00:16:51.618 } 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "method": "bdev_nvme_set_options", 00:16:51.618 "params": { 00:16:51.618 "action_on_timeout": "none", 00:16:51.618 "allow_accel_sequence": false, 00:16:51.618 "arbitration_burst": 0, 00:16:51.618 "bdev_retry_count": 3, 00:16:51.618 "ctrlr_loss_timeout_sec": 0, 00:16:51.618 "delay_cmd_submit": true, 00:16:51.618 "dhchap_dhgroups": [ 00:16:51.618 "null", 00:16:51.618 "ffdhe2048", 00:16:51.618 "ffdhe3072", 00:16:51.618 "ffdhe4096", 00:16:51.618 "ffdhe6144", 00:16:51.618 "ffdhe8192" 00:16:51.618 ], 00:16:51.618 "dhchap_digests": [ 00:16:51.618 "sha256", 00:16:51.618 "sha384", 00:16:51.618 "sha512" 00:16:51.618 ], 00:16:51.618 "disable_auto_failback": false, 00:16:51.618 "fast_io_fail_timeout_sec": 0, 00:16:51.618 "generate_uuids": false, 00:16:51.618 "high_priority_weight": 0, 00:16:51.618 "io_path_stat": false, 00:16:51.618 "io_queue_requests": 512, 00:16:51.618 "keep_alive_timeout_ms": 10000, 00:16:51.618 "low_priority_weight": 0, 00:16:51.618 "medium_priority_weight": 0, 00:16:51.618 "nvme_adminq_poll_period_us": 10000, 00:16:51.618 "nvme_error_stat": false, 00:16:51.618 "nvme_ioq_poll_period_us": 0, 00:16:51.618 "rdma_cm_event_timeout_ms": 0, 00:16:51.618 "rdma_max_cq_size": 0, 00:16:51.618 "rdma_srq_size": 0, 00:16:51.618 "reconnect_delay_sec": 0, 00:16:51.618 "timeout_admin_us": 0, 00:16:51.618 "timeout_us": 0, 00:16:51.618 "transport_ack_timeout": 0, 00:16:51.618 "transport_retry_count": 4, 00:16:51.618 "transport_tos": 0 00:16:51.618 } 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "method": "bdev_nvme_attach_controller", 00:16:51.618 "params": { 00:16:51.618 "adrfam": "IPv4", 00:16:51.618 "ctrlr_loss_timeout_sec": 0, 00:16:51.618 "ddgst": false, 00:16:51.618 "fast_io_fail_timeout_sec": 0, 00:16:51.618 "hdgst": false, 00:16:51.618 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:51.618 "name": "nvme0", 00:16:51.618 "prchk_guard": false, 00:16:51.618 "prchk_reftag": false, 00:16:51.618 "psk": "key0", 00:16:51.618 "reconnect_delay_sec": 0, 00:16:51.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.618 "traddr": "10.0.0.2", 00:16:51.618 "trsvcid": "4420", 00:16:51.618 "trtype": "TCP" 00:16:51.618 } 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "method": "bdev_nvme_set_hotplug", 00:16:51.618 "params": { 00:16:51.618 "enable": false, 00:16:51.618 "period_us": 100000 00:16:51.618 } 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "method": "bdev_enable_histogram", 00:16:51.618 "params": { 00:16:51.618 "enable": true, 00:16:51.618 "name": "nvme0n1" 00:16:51.618 } 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "method": "bdev_wait_for_examine" 00:16:51.618 } 00:16:51.618 ] 00:16:51.618 }, 00:16:51.618 { 00:16:51.618 "subsystem": "nbd", 00:16:51.618 "config": [] 00:16:51.618 } 00:16:51.618 ] 00:16:51.619 }' 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 85138 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85138 ']' 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 85138 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85138 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:51.619 killing process with pid 85138 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85138' 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85138 00:16:51.619 Received shutdown signal, test time was about 1.000000 seconds 00:16:51.619 00:16:51.619 Latency(us) 00:16:51.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.619 =================================================================================================================== 00:16:51.619 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85138 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 85097 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85097 ']' 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85097 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85097 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85097' 00:16:51.619 killing process with pid 85097 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85097 00:16:51.619 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85097 00:16:51.877 22:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:16:51.877 22:11:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.877 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.877 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.877 22:11:38 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:16:51.877 "subsystems": [ 00:16:51.877 { 00:16:51.877 "subsystem": "keyring", 00:16:51.877 "config": [ 00:16:51.877 { 00:16:51.877 "method": "keyring_file_add_key", 00:16:51.877 "params": { 00:16:51.877 "name": "key0", 00:16:51.877 "path": "/tmp/tmp.GdqwpZOJPt" 00:16:51.877 } 00:16:51.877 } 00:16:51.877 ] 00:16:51.877 }, 00:16:51.877 { 00:16:51.877 "subsystem": "iobuf", 00:16:51.877 "config": [ 00:16:51.877 { 00:16:51.877 "method": "iobuf_set_options", 00:16:51.877 "params": { 00:16:51.877 "large_bufsize": 135168, 00:16:51.877 "large_pool_count": 1024, 00:16:51.877 "small_bufsize": 8192, 00:16:51.877 "small_pool_count": 8192 00:16:51.877 } 00:16:51.877 } 00:16:51.877 ] 00:16:51.877 }, 00:16:51.877 { 00:16:51.877 "subsystem": 
"sock", 00:16:51.877 "config": [ 00:16:51.877 { 00:16:51.877 "method": "sock_set_default_impl", 00:16:51.877 "params": { 00:16:51.877 "impl_name": "posix" 00:16:51.877 } 00:16:51.877 }, 00:16:51.877 { 00:16:51.878 "method": "sock_impl_set_options", 00:16:51.878 "params": { 00:16:51.878 "enable_ktls": false, 00:16:51.878 "enable_placement_id": 0, 00:16:51.878 "enable_quickack": false, 00:16:51.878 "enable_recv_pipe": true, 00:16:51.878 "enable_zerocopy_send_client": false, 00:16:51.878 "enable_zerocopy_send_server": true, 00:16:51.878 "impl_name": "ssl", 00:16:51.878 "recv_buf_size": 4096, 00:16:51.878 "send_buf_size": 4096, 00:16:51.878 "tls_version": 0, 00:16:51.878 "zerocopy_threshold": 0 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "sock_impl_set_options", 00:16:51.878 "params": { 00:16:51.878 "enable_ktls": false, 00:16:51.878 "enable_placement_id": 0, 00:16:51.878 "enable_quickack": false, 00:16:51.878 "enable_recv_pipe": true, 00:16:51.878 "enable_zerocopy_send_client": false, 00:16:51.878 "enable_zerocopy_send_server": true, 00:16:51.878 "impl_name": "posix", 00:16:51.878 "recv_buf_size": 2097152, 00:16:51.878 "send_buf_size": 2097152, 00:16:51.878 "tls_version": 0, 00:16:51.878 "zerocopy_threshold": 0 00:16:51.878 } 00:16:51.878 } 00:16:51.878 ] 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "subsystem": "vmd", 00:16:51.878 "config": [] 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "subsystem": "accel", 00:16:51.878 "config": [ 00:16:51.878 { 00:16:51.878 "method": "accel_set_options", 00:16:51.878 "params": { 00:16:51.878 "buf_count": 2048, 00:16:51.878 "large_cache_size": 16, 00:16:51.878 "sequence_count": 2048, 00:16:51.878 "small_cache_size": 128, 00:16:51.878 "task_count": 2048 00:16:51.878 } 00:16:51.878 } 00:16:51.878 ] 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "subsystem": "bdev", 00:16:51.878 "config": [ 00:16:51.878 { 00:16:51.878 "method": "bdev_set_options", 00:16:51.878 "params": { 00:16:51.878 "bdev_auto_examine": true, 00:16:51.878 "bdev_io_cache_size": 256, 00:16:51.878 "bdev_io_pool_size": 65535, 00:16:51.878 "iobuf_large_cache_size": 16, 00:16:51.878 "iobuf_small_cache_size": 128 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "bdev_raid_set_options", 00:16:51.878 "params": { 00:16:51.878 "process_window_size_kb": 1024 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "bdev_iscsi_set_options", 00:16:51.878 "params": { 00:16:51.878 "timeout_sec": 30 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "bdev_nvme_set_options", 00:16:51.878 "params": { 00:16:51.878 "action_on_timeout": "none", 00:16:51.878 "allow_accel_sequence": false, 00:16:51.878 "arbitration_burst": 0, 00:16:51.878 "bdev_retry_count": 3, 00:16:51.878 "ctrlr_loss_timeout_sec": 0, 00:16:51.878 "delay_cmd_submit": true, 00:16:51.878 "dhchap_dhgroups": [ 00:16:51.878 "null", 00:16:51.878 "ffdhe2048", 00:16:51.878 "ffdhe3072", 00:16:51.878 "ffdhe4096", 00:16:51.878 "ffdhe6144", 00:16:51.878 "ffdhe8192" 00:16:51.878 ], 00:16:51.878 "dhchap_digests": [ 00:16:51.878 "sha256", 00:16:51.878 "sha384", 00:16:51.878 "sha512" 00:16:51.878 ], 00:16:51.878 "disable_auto_failback": false, 00:16:51.878 "fast_io_fail_timeout_sec": 0, 00:16:51.878 "generate_uuids": false, 00:16:51.878 "high_priority_weight": 0, 00:16:51.878 "io_path_stat": false, 00:16:51.878 "io_queue_requests": 0, 00:16:51.878 "keep_alive_timeout_ms": 10000, 00:16:51.878 "low_priority_weight": 0, 00:16:51.878 "medium_priority_weight": 0, 00:16:51.878 
"nvme_adminq_poll_period_us": 10000, 00:16:51.878 "nvme_error_stat": false, 00:16:51.878 "nvme_ioq_poll_period_us": 0, 00:16:51.878 "rdma_cm_event_timeout_ms": 0, 00:16:51.878 "rdma_max_cq_size": 0, 00:16:51.878 "rdma_srq_size": 0, 00:16:51.878 "reconnect_delay_sec": 0, 00:16:51.878 "timeout_admin_us": 0, 00:16:51.878 "timeout_us": 0, 00:16:51.878 "transport_ack_timeout": 0, 00:16:51.878 "transport_retry_count": 4, 00:16:51.878 "transport_tos": 0 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "bdev_nvme_set_hotplug", 00:16:51.878 "params": { 00:16:51.878 "enable": false, 00:16:51.878 "period_us": 100000 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "bdev_malloc_create", 00:16:51.878 "params": { 00:16:51.878 "block_size": 4096, 00:16:51.878 "name": "malloc0", 00:16:51.878 "num_blocks": 8192, 00:16:51.878 "optimal_io_boundary": 0, 00:16:51.878 "physical_block_size": 4096, 00:16:51.878 "uuid": "1b9affe7-9ba4-4755-9ae5-12ea688741cf" 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "bdev_wait_for_examine" 00:16:51.878 } 00:16:51.878 ] 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "subsystem": "nbd", 00:16:51.878 "config": [] 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "subsystem": "scheduler", 00:16:51.878 "config": [ 00:16:51.878 { 00:16:51.878 "method": "framework_set_scheduler", 00:16:51.878 "params": { 00:16:51.878 "name": "static" 00:16:51.878 } 00:16:51.878 } 00:16:51.878 ] 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "subsystem": "nvmf", 00:16:51.878 "config": [ 00:16:51.878 { 00:16:51.878 "method": "nvmf_set_config", 00:16:51.878 "params": { 00:16:51.878 "admin_cmd_passthru": { 00:16:51.878 "identify_ctrlr": false 00:16:51.878 }, 00:16:51.878 "discovery_filter": "match_any" 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "nvmf_set_max_subsystems", 00:16:51.878 "params": { 00:16:51.878 "max_subsystems": 1024 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "nvmf_set_crdt", 00:16:51.878 "params": { 00:16:51.878 "crdt1": 0, 00:16:51.878 "crdt2": 0, 00:16:51.878 "crdt3": 0 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "nvmf_create_transport", 00:16:51.878 "params": { 00:16:51.878 "abort_timeout_sec": 1, 00:16:51.878 "ack_timeout": 0, 00:16:51.878 "buf_cache_size": 4294967295, 00:16:51.878 "c2h_success": false, 00:16:51.878 "data_wr_pool_size": 0, 00:16:51.878 "dif_insert_or_strip": false, 00:16:51.878 "in_capsule_data_size": 4096, 00:16:51.878 "io_unit_size": 131072, 00:16:51.878 "max_aq_depth": 128, 00:16:51.878 "max_io_qpairs_per_ctrlr": 127, 00:16:51.878 "max_io_size": 131072, 00:16:51.878 "max_queue_depth": 128, 00:16:51.878 "num_shared_buffers": 511, 00:16:51.878 "sock_priority": 0, 00:16:51.878 "trtype": "TCP", 00:16:51.878 "zcopy": false 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "nvmf_create_subsystem", 00:16:51.878 "params": { 00:16:51.878 "allow_any_host": false, 00:16:51.878 "ana_reporting": false, 00:16:51.878 "max_cntlid": 65519, 00:16:51.878 "max_namespaces": 32, 00:16:51.878 "min_cntlid": 1, 00:16:51.878 "model_number": "SPDK bdev Controller", 00:16:51.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.878 "serial_number": "00000000000000000000" 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "nvmf_subsystem_add_host", 00:16:51.878 "params": { 00:16:51.878 "host": "nqn.2016-06.io.spdk:host1", 00:16:51.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.878 "psk": "key0" 00:16:51.878 } 00:16:51.878 }, 
00:16:51.878 { 00:16:51.878 "method": "nvmf_subsystem_add_ns", 00:16:51.878 "params": { 00:16:51.878 "namespace": { 00:16:51.878 "bdev_name": "malloc0", 00:16:51.878 "nguid": "1B9AFFE79BA447559AE512EA688741CF", 00:16:51.878 "no_auto_visible": false, 00:16:51.878 "nsid": 1, 00:16:51.878 "uuid": "1b9affe7-9ba4-4755-9ae5-12ea688741cf" 00:16:51.878 }, 00:16:51.878 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:51.878 } 00:16:51.878 }, 00:16:51.878 { 00:16:51.878 "method": "nvmf_subsystem_add_listener", 00:16:51.878 "params": { 00:16:51.878 "listen_address": { 00:16:51.878 "adrfam": "IPv4", 00:16:51.878 "traddr": "10.0.0.2", 00:16:51.878 "trsvcid": "4420", 00:16:51.878 "trtype": "TCP" 00:16:51.878 }, 00:16:51.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.878 "secure_channel": false, 00:16:51.878 "sock_impl": "ssl" 00:16:51.878 } 00:16:51.878 } 00:16:51.878 ] 00:16:51.878 } 00:16:51.878 ] 00:16:51.878 }' 00:16:51.878 22:11:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:51.878 22:11:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85211 00:16:51.878 22:11:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85211 00:16:51.878 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85211 ']' 00:16:51.878 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.878 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.878 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.878 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.878 22:11:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.878 [2024-07-15 22:11:38.768348] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:51.878 [2024-07-15 22:11:38.768441] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.137 [2024-07-15 22:11:38.906276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.137 [2024-07-15 22:11:38.963782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.137 [2024-07-15 22:11:38.963836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.137 [2024-07-15 22:11:38.963847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.137 [2024-07-15 22:11:38.963855] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.137 [2024-07-15 22:11:38.963862] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
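The JSON blob echoed above is the target configuration captured a moment earlier with save_config, now replayed into a brand-new nvmf_tgt; the -c /dev/fd/62 argument suggests the harness hands it over through bash process substitution rather than a file on disk. Note that the replayed config carries the whole TLS setup declaratively: the keyring_file_add_key entry for key0, the host added to the subsystem with psk key0, and the listener created with secure_channel false on the ssl sock_impl. A rough reproduction of the capture-and-replay pattern (the process-substitution detail is an inference from the /dev/fd path):

  # snapshot the live target configuration as JSON ...
  tgtcfg=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)
  # ... and feed it straight into a fresh target; <(...) is what appears as
  # /dev/fd/NN on the traced command line
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &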
00:16:52.137 [2024-07-15 22:11:38.963941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.405 [2024-07-15 22:11:39.154832] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.405 [2024-07-15 22:11:39.186739] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:52.405 [2024-07-15 22:11:39.186933] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=85255 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 85255 /var/tmp/bdevperf.sock 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85255 ']' 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:52.980 22:11:39 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:16:52.980 "subsystems": [ 00:16:52.980 { 00:16:52.980 "subsystem": "keyring", 00:16:52.980 "config": [ 00:16:52.980 { 00:16:52.980 "method": "keyring_file_add_key", 00:16:52.980 "params": { 00:16:52.980 "name": "key0", 00:16:52.980 "path": "/tmp/tmp.GdqwpZOJPt" 00:16:52.980 } 00:16:52.980 } 00:16:52.980 ] 00:16:52.980 }, 00:16:52.980 { 00:16:52.980 "subsystem": "iobuf", 00:16:52.981 "config": [ 00:16:52.981 { 00:16:52.981 "method": "iobuf_set_options", 00:16:52.981 "params": { 00:16:52.981 "large_bufsize": 135168, 00:16:52.981 "large_pool_count": 1024, 00:16:52.981 "small_bufsize": 8192, 00:16:52.981 "small_pool_count": 8192 00:16:52.981 } 00:16:52.981 } 00:16:52.981 ] 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "subsystem": "sock", 00:16:52.981 "config": [ 00:16:52.981 { 00:16:52.981 "method": "sock_set_default_impl", 00:16:52.981 "params": { 00:16:52.981 "impl_name": "posix" 00:16:52.981 } 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "method": "sock_impl_set_options", 00:16:52.981 "params": { 00:16:52.981 "enable_ktls": false, 00:16:52.981 "enable_placement_id": 0, 00:16:52.981 "enable_quickack": false, 00:16:52.981 "enable_recv_pipe": true, 00:16:52.981 "enable_zerocopy_send_client": false, 00:16:52.981 "enable_zerocopy_send_server": true, 00:16:52.981 "impl_name": "ssl", 00:16:52.981 "recv_buf_size": 4096, 00:16:52.981 "send_buf_size": 4096, 00:16:52.981 "tls_version": 0, 00:16:52.981 "zerocopy_threshold": 0 00:16:52.981 } 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "method": "sock_impl_set_options", 00:16:52.981 "params": { 00:16:52.981 "enable_ktls": false, 00:16:52.981 "enable_placement_id": 0, 00:16:52.981 "enable_quickack": false, 00:16:52.981 "enable_recv_pipe": true, 00:16:52.981 "enable_zerocopy_send_client": false, 00:16:52.981 "enable_zerocopy_send_server": true, 00:16:52.981 "impl_name": "posix", 00:16:52.981 "recv_buf_size": 2097152, 00:16:52.981 "send_buf_size": 2097152, 00:16:52.981 "tls_version": 0, 00:16:52.981 "zerocopy_threshold": 0 00:16:52.981 } 00:16:52.981 } 00:16:52.981 ] 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "subsystem": "vmd", 00:16:52.981 "config": [] 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "subsystem": "accel", 00:16:52.981 "config": [ 00:16:52.981 { 00:16:52.981 "method": "accel_set_options", 00:16:52.981 "params": { 00:16:52.981 "buf_count": 2048, 00:16:52.981 "large_cache_size": 16, 00:16:52.981 "sequence_count": 2048, 00:16:52.981 "small_cache_size": 128, 00:16:52.981 "task_count": 2048 00:16:52.981 } 00:16:52.981 } 00:16:52.981 ] 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "subsystem": "bdev", 00:16:52.981 "config": [ 00:16:52.981 { 00:16:52.981 "method": "bdev_set_options", 00:16:52.981 "params": { 00:16:52.981 "bdev_auto_examine": true, 00:16:52.981 "bdev_io_cache_size": 256, 00:16:52.981 "bdev_io_pool_size": 65535, 00:16:52.981 "iobuf_large_cache_size": 16, 00:16:52.981 "iobuf_small_cache_size": 128 00:16:52.981 } 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "method": "bdev_raid_set_options", 00:16:52.981 "params": { 00:16:52.981 "process_window_size_kb": 1024 00:16:52.981 } 00:16:52.981 }, 00:16:52.981 
{ 00:16:52.981 "method": "bdev_iscsi_set_options", 00:16:52.981 "params": { 00:16:52.981 "timeout_sec": 30 00:16:52.981 } 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "method": "bdev_nvme_set_options", 00:16:52.981 "params": { 00:16:52.981 "action_on_timeout": "none", 00:16:52.981 "allow_accel_sequence": false, 00:16:52.981 "arbitration_burst": 0, 00:16:52.981 "bdev_retry_count": 3, 00:16:52.981 "ctrlr_loss_timeout_sec": 0, 00:16:52.981 "delay_cmd_submit": true, 00:16:52.981 "dhchap_dhgroups": [ 00:16:52.981 "null", 00:16:52.981 "ffdhe2048", 00:16:52.981 "ffdhe3072", 00:16:52.981 "ffdhe4096", 00:16:52.981 "ffdhe6144", 00:16:52.981 "ffdhe8192" 00:16:52.981 ], 00:16:52.981 "dhchap_digests": [ 00:16:52.981 "sha256", 00:16:52.981 "sha384", 00:16:52.981 "sha512" 00:16:52.981 ], 00:16:52.981 "disable_auto_failback": false, 00:16:52.981 "fast_io_fail_timeout_sec": 0, 00:16:52.981 "generate_uuids": false, 00:16:52.981 "high_priority_weight": 0, 00:16:52.981 "io_path_stat": false, 00:16:52.981 "io_queue_requests": 512, 00:16:52.981 "keep_alive_timeout_ms": 10000, 00:16:52.981 "low_priority_weight": 0, 00:16:52.981 "medium_priority_weight": 0, 00:16:52.981 "nvme_adminq_poll_period_us": 10000, 00:16:52.981 "nvme_error_stat": false, 00:16:52.981 "nvme_ioq_poll_period_us": 0, 00:16:52.981 "rdma_cm_event_timeout_ms": 0, 00:16:52.981 "rdma_max_cq_size": 0, 00:16:52.981 "rdma_srq_size": 0, 00:16:52.981 "reconnect_delay_sec": 0, 00:16:52.981 "timeout_admin_us": 0, 00:16:52.981 "timeout_us": 0, 00:16:52.981 "transport_ack_timeout": 0, 00:16:52.981 "transport_retry_count": 4, 00:16:52.981 "transport_tos": 0 00:16:52.981 } 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "method": "bdev_nvme_attach_controller", 00:16:52.981 "params": { 00:16:52.981 "adrfam": "IPv4", 00:16:52.981 "ctrlr_loss_timeout_sec": 0, 00:16:52.981 "ddgst": false, 00:16:52.981 "fast_io_fail_timeout_sec": 0, 00:16:52.981 "hdgst": false, 00:16:52.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:52.981 "name": "nvme0", 00:16:52.981 "prchk_guard": false, 00:16:52.981 "prchk_reftag": false, 00:16:52.981 "psk": "key0", 00:16:52.981 "reconnect_delay_sec": 0, 00:16:52.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.981 "traddr": "10.0.0.2", 00:16:52.981 "trsvcid": "4420", 00:16:52.981 "trtype": "TCP" 00:16:52.981 } 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "method": "bdev_nvme_set_hotplug", 00:16:52.981 "params": { 00:16:52.981 "enable": false, 00:16:52.981 "period_us": 100000 00:16:52.981 } 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "method": "bdev_enable_histogram", 00:16:52.981 "params": { 00:16:52.981 "enable": true, 00:16:52.981 "name": "nvme0n1" 00:16:52.981 } 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "method": "bdev_wait_for_examine" 00:16:52.981 } 00:16:52.981 ] 00:16:52.981 }, 00:16:52.981 { 00:16:52.981 "subsystem": "nbd", 00:16:52.981 "config": [] 00:16:52.981 } 00:16:52.981 ] 00:16:52.981 }' 00:16:52.981 [2024-07-15 22:11:39.900059] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:16:52.981 [2024-07-15 22:11:39.900203] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85255 ] 00:16:53.240 [2024-07-15 22:11:40.034977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.240 [2024-07-15 22:11:40.095271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.498 [2024-07-15 22:11:40.229850] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:54.066 22:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.066 22:11:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:54.066 22:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:54.066 22:11:40 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:16:54.324 22:11:41 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.324 22:11:41 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:54.324 Running I/O for 1 seconds... 00:16:55.699 00:16:55.699 Latency(us) 00:16:55.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.699 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:55.699 Verification LBA range: start 0x0 length 0x2000 00:16:55.699 nvme0n1 : 1.02 3973.53 15.52 0.00 0.00 31868.95 7268.54 25618.62 00:16:55.699 =================================================================================================================== 00:16:55.699 Total : 3973.53 15.52 0.00 0.00 31868.95 7268.54 25618.62 00:16:55.699 0 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:55.699 nvmf_trace.0 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85255 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85255 ']' 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85255 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:55.699 
22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85255 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:55.699 killing process with pid 85255 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85255' 00:16:55.699 Received shutdown signal, test time was about 1.000000 seconds 00:16:55.699 00:16:55.699 Latency(us) 00:16:55.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.699 =================================================================================================================== 00:16:55.699 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85255 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85255 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.699 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:55.699 rmmod nvme_tcp 00:16:55.699 rmmod nvme_fabrics 00:16:55.957 rmmod nvme_keyring 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85211 ']' 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85211 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85211 ']' 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85211 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85211 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85211' 00:16:55.957 killing process with pid 85211 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85211 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85211 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NjQlOJ9AJo /tmp/tmp.1Pbpt2IxJG /tmp/tmp.GdqwpZOJPt 00:16:55.957 00:16:55.957 real 1m21.718s 00:16:55.957 user 2m10.944s 00:16:55.957 sys 0m26.498s 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:55.957 ************************************ 00:16:55.957 END TEST nvmf_tls 00:16:55.957 ************************************ 00:16:55.957 22:11:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.215 22:11:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:56.215 22:11:42 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:56.215 22:11:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:56.215 22:11:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.215 22:11:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:56.215 ************************************ 00:16:56.215 START TEST nvmf_fips 00:16:56.215 ************************************ 00:16:56.215 22:11:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:56.215 * Looking for test storage... 
00:16:56.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.215 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:16:56.216 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:16:56.475 Error setting digest 00:16:56.475 0052E5BA8A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:16:56.475 0052E5BA8A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:56.475 Cannot find device "nvmf_tgt_br" 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.475 Cannot find device "nvmf_tgt_br2" 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:56.475 Cannot find device "nvmf_tgt_br" 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:56.475 Cannot find device "nvmf_tgt_br2" 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:56.475 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:56.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:16:56.733 00:16:56.733 --- 10.0.0.2 ping statistics --- 00:16:56.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.733 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:56.733 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:56.733 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:56.733 00:16:56.733 --- 10.0.0.3 ping statistics --- 00:16:56.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.733 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:16:56.733 00:16:56.733 --- 10.0.0.1 ping statistics --- 00:16:56.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.733 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85541 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85541 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85541 ']' 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.733 22:11:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:56.991 [2024-07-15 22:11:43.693970] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:16:56.991 [2024-07-15 22:11:43.694066] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.991 [2024-07-15 22:11:43.829552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.991 [2024-07-15 22:11:43.888272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.991 [2024-07-15 22:11:43.888322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.991 [2024-07-15 22:11:43.888334] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.991 [2024-07-15 22:11:43.888342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.991 [2024-07-15 22:11:43.888349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.991 [2024-07-15 22:11:43.888379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.925 22:11:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:57.926 22:11:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:58.184 [2024-07-15 22:11:44.934767] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.184 [2024-07-15 22:11:44.950713] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:58.184 [2024-07-15 22:11:44.950908] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.184 [2024-07-15 22:11:44.977417] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:58.184 malloc0 00:16:58.184 22:11:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:58.184 22:11:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85593 00:16:58.184 22:11:45 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:58.184 22:11:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 85593 /var/tmp/bdevperf.sock 00:16:58.184 22:11:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85593 ']' 00:16:58.184 22:11:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.184 22:11:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.184 22:11:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.184 22:11:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.184 22:11:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:58.184 [2024-07-15 22:11:45.069261] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:16:58.184 [2024-07-15 22:11:45.069356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85593 ] 00:16:58.441 [2024-07-15 22:11:45.207427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.441 [2024-07-15 22:11:45.267715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.441 22:11:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.441 22:11:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:58.441 22:11:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:58.734 [2024-07-15 22:11:45.594928] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:58.734 [2024-07-15 22:11:45.595034] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:58.734 TLSTESTn1 00:16:58.993 22:11:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:58.993 Running I/O for 10 seconds... 
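Condensed from the trace above, the initiator side of this test is essentially three commands: start bdevperf with -z so it idles until driven over its private RPC socket, attach a TLS-protected controller to the target's 4420 listener using the same PSK file the target was configured with, then trigger the workload with bdevperf.py. The sleep standing in for the waitforlisten helper is an assumption made for brevity.

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock
  KEY=$SPDK/test/nvmf/fips/key.txt         # PSK in NVMeTLSkey-1:01:... interchange format

  $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &
  sleep 2                                   # assumption: stands in for the waitforlisten helper

  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk $KEY

  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests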
00:17:08.969 00:17:08.969 Latency(us) 00:17:08.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.969 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:08.969 Verification LBA range: start 0x0 length 0x2000 00:17:08.969 TLSTESTn1 : 10.02 3780.69 14.77 0.00 0.00 33791.57 7030.23 54096.99 00:17:08.969 =================================================================================================================== 00:17:08.969 Total : 3780.69 14.77 0.00 0.00 33791.57 7030.23 54096.99 00:17:08.969 0 00:17:08.969 22:11:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:08.969 22:11:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:08.969 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:17:08.969 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:17:08.969 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:08.969 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:08.969 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:08.969 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:08.969 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:08.969 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:08.969 nvmf_trace.0 00:17:09.228 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:17:09.228 22:11:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85593 00:17:09.228 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85593 ']' 00:17:09.228 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85593 00:17:09.228 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:09.228 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:09.228 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85593 00:17:09.228 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:09.228 killing process with pid 85593 00:17:09.228 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:09.228 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85593' 00:17:09.228 Received shutdown signal, test time was about 10.000000 seconds 00:17:09.228 00:17:09.228 Latency(us) 00:17:09.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.228 =================================================================================================================== 00:17:09.228 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.228 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85593 00:17:09.228 [2024-07-15 22:11:55.963810] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:09.228 22:11:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85593 00:17:09.228 22:11:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:09.228 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
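The cleanup above first archives the trace shared-memory file (nvmf_trace.0) into the output directory and then tears bdevperf down through the common killprocess helper. A rough reconstruction of that helper, built only from the checks visible in this trace; the real autotest_common.sh implementation may differ, in particular in how it handles sudo-wrapped processes.

  killprocess() {   # reconstruction from the trace, not the canonical helper
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0              # nothing to do if it is already gone
      if [[ $(uname) == Linux ]]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")         # reactor_2 for this bdevperf instance
          [[ $name == sudo ]] && return 1                 # assumption: refuse to kill a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                                 # reap it, ignoring its exit status
  }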
00:17:09.228 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:09.487 rmmod nvme_tcp 00:17:09.487 rmmod nvme_fabrics 00:17:09.487 rmmod nvme_keyring 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85541 ']' 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85541 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85541 ']' 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85541 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85541 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85541' 00:17:09.487 killing process with pid 85541 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85541 00:17:09.487 [2024-07-15 22:11:56.275929] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:09.487 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85541 00:17:09.747 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:09.747 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:09.747 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:09.747 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.747 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:09.747 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.747 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.747 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.747 22:11:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:09.747 22:11:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:09.747 00:17:09.747 real 0m13.534s 00:17:09.747 user 0m17.964s 00:17:09.747 sys 0m5.517s 00:17:09.747 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:09.747 22:11:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:09.747 ************************************ 00:17:09.747 END TEST nvmf_fips 00:17:09.747 ************************************ 00:17:09.747 22:11:56 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:09.747 22:11:56 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:17:09.747 22:11:56 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:17:09.747 22:11:56 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:17:09.747 22:11:56 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:09.747 22:11:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:09.747 22:11:56 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:17:09.747 22:11:56 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:09.747 22:11:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:09.747 22:11:56 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:17:09.747 22:11:56 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:09.747 22:11:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:09.747 22:11:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.747 22:11:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:09.747 ************************************ 00:17:09.747 START TEST nvmf_multicontroller 00:17:09.747 ************************************ 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:09.747 * Looking for test storage... 00:17:09.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.747 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
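As in the FIPS run, sourcing nvmf/common.sh here generates a fresh host identity before anything connects. A small sketch of that step; deriving the host ID by stripping the NQN prefix is an assumption suggested by the matching values in the trace.

  NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # keep only the UUID portion
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # Later consumed by nvme-cli when connecting to a subsystem, e.g.:
  #   nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"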
00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.748 22:11:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:09.748 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:10.007 Cannot find device "nvmf_tgt_br" 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:10.007 Cannot find device "nvmf_tgt_br2" 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:10.007 Cannot find device "nvmf_tgt_br" 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:10.007 Cannot find device "nvmf_tgt_br2" 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:10.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:10.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:10.007 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:10.265 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:10.265 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:10.265 22:11:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:10.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:17:10.265 00:17:10.265 --- 10.0.0.2 ping statistics --- 00:17:10.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.265 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:10.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:10.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:17:10.265 00:17:10.265 --- 10.0.0.3 ping statistics --- 00:17:10.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.265 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:10.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:10.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:10.265 00:17:10.265 --- 10.0.0.1 ping statistics --- 00:17:10.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.265 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85942 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85942 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:10.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85942 ']' 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.265 22:11:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:10.265 [2024-07-15 22:11:57.094194] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:17:10.265 [2024-07-15 22:11:57.094486] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.524 [2024-07-15 22:11:57.234235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:10.524 [2024-07-15 22:11:57.295641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:10.524 [2024-07-15 22:11:57.295686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.524 [2024-07-15 22:11:57.295714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.524 [2024-07-15 22:11:57.295723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.524 [2024-07-15 22:11:57.295730] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.524 [2024-07-15 22:11:57.295840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.524 [2024-07-15 22:11:57.296515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.524 [2024-07-15 22:11:57.296560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.460 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.460 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:17:11.460 22:11:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:11.460 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:11.460 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.460 22:11:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.460 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:11.460 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.460 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.460 [2024-07-15 22:11:58.109248] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.460 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.460 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:11.460 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 Malloc0 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 [2024-07-15 22:11:58.164297] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 [2024-07-15 22:11:58.172228] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 Malloc1 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
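The setup traced above boils down to a short target-side RPC sequence: create the TCP transport, back each subsystem with a malloc bdev, and expose both subsystems on ports 4420 and 4421. As a rough standalone sketch (the rpc.py path and the target's default RPC socket are assumptions based on this run's workspace layout; the method names and arguments are copied verbatim from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of the SPDK RPC client for this workspace
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 is configured the same way in the trace, backed by Malloc1 and serial SPDK00000000000002

With both listeners in place, the bdevperf host started below can attach to either port of either subsystem.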
00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=85994 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85994 /var/tmp/bdevperf.sock 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85994 ']' 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.461 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 NVMe0n1 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.720 1 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:11.720 22:11:58 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 2024/07/15 22:11:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:11.720 request: 00:17:11.720 { 00:17:11.720 "method": "bdev_nvme_attach_controller", 00:17:11.720 "params": { 00:17:11.720 "name": "NVMe0", 00:17:11.720 "trtype": "tcp", 00:17:11.720 "traddr": "10.0.0.2", 00:17:11.720 "adrfam": "ipv4", 00:17:11.720 "trsvcid": "4420", 00:17:11.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.720 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:11.720 "hostaddr": "10.0.0.2", 00:17:11.720 "hostsvcid": "60000", 00:17:11.720 "prchk_reftag": false, 00:17:11.720 "prchk_guard": false, 00:17:11.720 "hdgst": false, 00:17:11.720 "ddgst": false 00:17:11.720 } 00:17:11.720 } 00:17:11.720 Got JSON-RPC error response 00:17:11.720 GoRPCClient: error on JSON-RPC call 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.720 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.988 2024/07/15 22:11:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:11.988 request: 00:17:11.988 { 00:17:11.988 "method": "bdev_nvme_attach_controller", 00:17:11.988 "params": { 00:17:11.988 "name": "NVMe0", 00:17:11.988 "trtype": "tcp", 00:17:11.988 "traddr": "10.0.0.2", 00:17:11.988 "adrfam": "ipv4", 00:17:11.988 "trsvcid": "4420", 00:17:11.988 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:11.988 "hostaddr": "10.0.0.2", 00:17:11.988 "hostsvcid": "60000", 00:17:11.988 "prchk_reftag": false, 00:17:11.988 "prchk_guard": false, 00:17:11.988 "hdgst": false, 00:17:11.988 "ddgst": false 00:17:11.988 } 00:17:11.988 } 00:17:11.988 Got JSON-RPC error response 00:17:11.988 GoRPCClient: error on JSON-RPC call 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.988 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.989 2024/07/15 22:11:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:17:11.989 request: 00:17:11.989 { 00:17:11.989 "method": "bdev_nvme_attach_controller", 00:17:11.989 "params": { 00:17:11.989 "name": "NVMe0", 00:17:11.989 "trtype": "tcp", 00:17:11.989 "traddr": "10.0.0.2", 00:17:11.989 "adrfam": "ipv4", 00:17:11.989 "trsvcid": "4420", 00:17:11.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.989 "hostaddr": "10.0.0.2", 00:17:11.989 "hostsvcid": "60000", 00:17:11.989 "prchk_reftag": false, 00:17:11.989 "prchk_guard": false, 00:17:11.989 "hdgst": false, 00:17:11.989 "ddgst": false, 00:17:11.989 "multipath": "disable" 00:17:11.989 } 00:17:11.989 } 00:17:11.989 Got JSON-RPC error response 00:17:11.989 GoRPCClient: error on JSON-RPC call 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.989 2024/07/15 22:11:58 error on JSON-RPC 
call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:11.989 request: 00:17:11.989 { 00:17:11.989 "method": "bdev_nvme_attach_controller", 00:17:11.989 "params": { 00:17:11.989 "name": "NVMe0", 00:17:11.989 "trtype": "tcp", 00:17:11.989 "traddr": "10.0.0.2", 00:17:11.989 "adrfam": "ipv4", 00:17:11.989 "trsvcid": "4420", 00:17:11.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.989 "hostaddr": "10.0.0.2", 00:17:11.989 "hostsvcid": "60000", 00:17:11.989 "prchk_reftag": false, 00:17:11.989 "prchk_guard": false, 00:17:11.989 "hdgst": false, 00:17:11.989 "ddgst": false, 00:17:11.989 "multipath": "failover" 00:17:11.989 } 00:17:11.989 } 00:17:11.989 Got JSON-RPC error response 00:17:11.989 GoRPCClient: error on JSON-RPC call 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.989 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.989 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:11.989 22:11:58 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:11.989 22:11:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:13.392 0 00:17:13.392 22:11:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:13.392 22:11:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.392 22:11:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 85994 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85994 ']' 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85994 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85994 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:13.392 killing process with pid 85994 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85994' 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85994 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85994 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r 
file 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:17:13.392 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:17:13.392 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:13.392 [2024-07-15 22:11:58.276577] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:17:13.392 [2024-07-15 22:11:58.276750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85994 ] 00:17:13.392 [2024-07-15 22:11:58.413509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.392 [2024-07-15 22:11:58.472946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.392 [2024-07-15 22:11:58.852075] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 1e1f6b03-c0c5-4931-a183-0b4f756acbea already exists 00:17:13.392 [2024-07-15 22:11:58.852156] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:1e1f6b03-c0c5-4931-a183-0b4f756acbea alias for bdev NVMe1n1 00:17:13.392 [2024-07-15 22:11:58.852176] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:13.392 Running I/O for 1 seconds... 00:17:13.392 00:17:13.392 Latency(us) 00:17:13.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.392 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:13.392 NVMe0n1 : 1.00 18898.71 73.82 0.00 0.00 6761.70 3515.11 11856.06 00:17:13.392 =================================================================================================================== 00:17:13.392 Total : 18898.71 73.82 0.00 0.00 6761.70 3515.11 11856.06 00:17:13.392 Received shutdown signal, test time was about 1.000000 seconds 00:17:13.392 00:17:13.392 Latency(us) 00:17:13.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.392 =================================================================================================================== 00:17:13.392 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:13.392 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.393 rmmod nvme_tcp 00:17:13.393 rmmod nvme_fabrics 00:17:13.393 rmmod nvme_keyring 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85942 ']' 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85942 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85942 ']' 00:17:13.393 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85942 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85942 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:13.651 killing process with pid 85942 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85942' 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85942 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85942 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.651 22:12:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:13.910 00:17:13.910 real 0m4.023s 00:17:13.910 user 0m12.036s 00:17:13.910 sys 0m0.897s 00:17:13.910 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.910 22:12:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:13.910 ************************************ 00:17:13.910 END TEST nvmf_multicontroller 00:17:13.911 ************************************ 00:17:13.911 22:12:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:13.911 22:12:00 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:13.911 22:12:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:13.911 22:12:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.911 22:12:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:13.911 ************************************ 00:17:13.911 START TEST nvmf_aer 00:17:13.911 ************************************ 
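What the multicontroller run above exercises is bdev_nvme_attach_controller's duplicate-name handling on the bdevperf RPC socket: once NVMe0 is attached to cnode1 at 10.0.0.2:4420, re-attaching under the same name with a different host NQN, a different subsystem, or the identical network path is rejected with code -114, while attaching the same name to the second listener (port 4421) of the same subsystem is accepted as an additional path. A minimal sketch of the two outcomes, assuming the same rpc.py client as above (all arguments are copied from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of the SPDK RPC client
sock=/var/tmp/bdevperf.sock
# first path; this is what creates the NVMe0n1 bdev used by the I/O test
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# identical name and network path again: rejected with "A controller named NVMe0 already exists ..."
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover || echo 'rejected as expected'
# same name, same subsystem, second listener: accepted as an extra path for NVMe0
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1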
00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:13.911 * Looking for test storage... 00:17:13.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:13.911 Cannot find device "nvmf_tgt_br" 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.911 Cannot find device "nvmf_tgt_br2" 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:13.911 Cannot find device "nvmf_tgt_br" 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:13.911 Cannot find device "nvmf_tgt_br2" 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:17:13.911 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:14.170 
22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:14.170 22:12:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:14.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:17:14.170 00:17:14.170 --- 10.0.0.2 ping statistics --- 00:17:14.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.170 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:14.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:14.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:17:14.170 00:17:14.170 --- 10.0.0.3 ping statistics --- 00:17:14.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.170 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:14.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:14.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:14.170 00:17:14.170 --- 10.0.0.1 ping statistics --- 00:17:14.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.170 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:14.170 22:12:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.429 22:12:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:14.429 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=86228 00:17:14.429 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 86228 00:17:14.429 22:12:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:14.429 22:12:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86228 ']' 00:17:14.429 22:12:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.429 22:12:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.429 22:12:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.429 22:12:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.429 22:12:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:14.429 [2024-07-15 22:12:01.190046] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:17:14.430 [2024-07-15 22:12:01.190183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.430 [2024-07-15 22:12:01.329432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.688 [2024-07-15 22:12:01.390416] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.688 [2024-07-15 22:12:01.390468] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:14.688 [2024-07-15 22:12:01.390480] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.688 [2024-07-15 22:12:01.390488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.688 [2024-07-15 22:12:01.390495] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.688 [2024-07-15 22:12:01.390664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.688 [2024-07-15 22:12:01.390974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.688 [2024-07-15 22:12:01.391243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.688 [2024-07-15 22:12:01.391251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.623 [2024-07-15 22:12:02.253953] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.623 Malloc0 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:15.623 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.624 [2024-07-15 22:12:02.309554] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.624 [ 00:17:15.624 { 00:17:15.624 "allow_any_host": true, 00:17:15.624 "hosts": [], 00:17:15.624 "listen_addresses": [], 00:17:15.624 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:15.624 "subtype": "Discovery" 00:17:15.624 }, 00:17:15.624 { 00:17:15.624 "allow_any_host": true, 00:17:15.624 "hosts": [], 00:17:15.624 "listen_addresses": [ 00:17:15.624 { 00:17:15.624 "adrfam": "IPv4", 00:17:15.624 "traddr": "10.0.0.2", 00:17:15.624 "trsvcid": "4420", 00:17:15.624 "trtype": "TCP" 00:17:15.624 } 00:17:15.624 ], 00:17:15.624 "max_cntlid": 65519, 00:17:15.624 "max_namespaces": 2, 00:17:15.624 "min_cntlid": 1, 00:17:15.624 "model_number": "SPDK bdev Controller", 00:17:15.624 "namespaces": [ 00:17:15.624 { 00:17:15.624 "bdev_name": "Malloc0", 00:17:15.624 "name": "Malloc0", 00:17:15.624 "nguid": "48BDF05562E74EA0ACD1522A5B5D1312", 00:17:15.624 "nsid": 1, 00:17:15.624 "uuid": "48bdf055-62e7-4ea0-acd1-522a5b5d1312" 00:17:15.624 } 00:17:15.624 ], 00:17:15.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.624 "serial_number": "SPDK00000000000001", 00:17:15.624 "subtype": "NVMe" 00:17:15.624 } 00:17:15.624 ] 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=86282 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.624 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.882 Malloc1 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.882 Asynchronous Event Request test 00:17:15.882 Attaching to 10.0.0.2 00:17:15.882 Attached to 10.0.0.2 00:17:15.882 Registering asynchronous event callbacks... 00:17:15.882 Starting namespace attribute notice tests for all controllers... 00:17:15.882 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:15.882 aer_cb - Changed Namespace 00:17:15.882 Cleaning up... 00:17:15.882 [ 00:17:15.882 { 00:17:15.882 "allow_any_host": true, 00:17:15.882 "hosts": [], 00:17:15.882 "listen_addresses": [], 00:17:15.882 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:15.882 "subtype": "Discovery" 00:17:15.882 }, 00:17:15.882 { 00:17:15.882 "allow_any_host": true, 00:17:15.882 "hosts": [], 00:17:15.882 "listen_addresses": [ 00:17:15.882 { 00:17:15.882 "adrfam": "IPv4", 00:17:15.882 "traddr": "10.0.0.2", 00:17:15.882 "trsvcid": "4420", 00:17:15.882 "trtype": "TCP" 00:17:15.882 } 00:17:15.882 ], 00:17:15.882 "max_cntlid": 65519, 00:17:15.882 "max_namespaces": 2, 00:17:15.882 "min_cntlid": 1, 00:17:15.882 "model_number": "SPDK bdev Controller", 00:17:15.882 "namespaces": [ 00:17:15.882 { 00:17:15.882 "bdev_name": "Malloc0", 00:17:15.882 "name": "Malloc0", 00:17:15.882 "nguid": "48BDF05562E74EA0ACD1522A5B5D1312", 00:17:15.882 "nsid": 1, 00:17:15.882 "uuid": "48bdf055-62e7-4ea0-acd1-522a5b5d1312" 00:17:15.882 }, 00:17:15.882 { 00:17:15.882 "bdev_name": "Malloc1", 00:17:15.882 "name": "Malloc1", 00:17:15.882 "nguid": "0CAAD5BD4B0C4AE8943C3E8FAFBA96B7", 00:17:15.882 "nsid": 2, 00:17:15.882 "uuid": "0caad5bd-4b0c-4ae8-943c-3e8fafba96b7" 00:17:15.882 } 00:17:15.882 ], 00:17:15.882 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.882 "serial_number": "SPDK00000000000001", 00:17:15.882 "subtype": "NVMe" 00:17:15.882 } 00:17:15.882 ] 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 86282 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:15.882 rmmod nvme_tcp 00:17:15.882 rmmod nvme_fabrics 00:17:15.882 rmmod nvme_keyring 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 86228 ']' 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 86228 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86228 ']' 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86228 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:17:15.882 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.883 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86228 00:17:15.883 killing process with pid 86228 00:17:15.883 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:15.883 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:15.883 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86228' 00:17:15.883 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86228 00:17:15.883 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86228 00:17:16.140 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:16.140 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:16.140 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:16.140 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.140 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:16.140 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.141 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
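The nvmf_aer run above reduces to a short RPC sequence: host/aer.sh hot-adds a second malloc namespace to nqn.2016-06.io.spdk:cnode1, the target raises a Changed Namespace List notice (the "aer_cb for log page 4, aen_event_type: 0x02" line), and the test then deletes the bdevs and the subsystem. A minimal sketch of that sequence, assuming a running nvmf_tgt that already exposes cnode1 with Malloc0 listening on 10.0.0.2:4420 and that rpc.py is invoked from the SPDK repo root (the harness itself goes through its rpc_cmd wrapper):

  # Hot-add a second namespace; this is what triggers the namespace-attribute-change AEN.
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  # Confirm both namespaces are attached (nsid 1 = Malloc0, nsid 2 = Malloc1).
  scripts/rpc.py nvmf_get_subsystems
  # Teardown mirrors the trace: drop the bdevs, then the subsystem.
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_malloc_delete Malloc1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1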
00:17:16.141 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.141 22:12:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:16.141 00:17:16.141 real 0m2.326s 00:17:16.141 user 0m6.536s 00:17:16.141 sys 0m0.560s 00:17:16.141 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:16.141 ************************************ 00:17:16.141 22:12:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:16.141 END TEST nvmf_aer 00:17:16.141 ************************************ 00:17:16.141 22:12:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:16.141 22:12:03 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:16.141 22:12:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:16.141 22:12:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.141 22:12:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.141 ************************************ 00:17:16.141 START TEST nvmf_async_init 00:17:16.141 ************************************ 00:17:16.141 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:16.399 * Looking for test storage... 00:17:16.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.399 
22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=055ca75e9616404981e0c4d4e420c4e5 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:16.399 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:16.400 Cannot find device "nvmf_tgt_br" 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:16.400 Cannot find device "nvmf_tgt_br2" 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:16.400 Cannot find device "nvmf_tgt_br" 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:16.400 Cannot find device "nvmf_tgt_br2" 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:16.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:16.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:16.400 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:16.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:17:16.658 00:17:16.658 --- 10.0.0.2 ping statistics --- 00:17:16.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.658 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:16.658 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:16.658 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:17:16.658 00:17:16.658 --- 10.0.0.3 ping statistics --- 00:17:16.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.658 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:16.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:16.658 00:17:16.658 --- 10.0.0.1 ping statistics --- 00:17:16.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.658 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86455 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86455 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86455 ']' 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.658 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:16.658 [2024-07-15 22:12:03.571869] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:17:16.658 [2024-07-15 22:12:03.571971] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.915 [2024-07-15 22:12:03.705653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.915 [2024-07-15 22:12:03.763167] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.915 [2024-07-15 22:12:03.763222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:16.915 [2024-07-15 22:12:03.763233] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.915 [2024-07-15 22:12:03.763242] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.915 [2024-07-15 22:12:03.763250] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.915 [2024-07-15 22:12:03.763290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.915 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.915 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:17:16.915 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.915 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.915 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.173 [2024-07-15 22:12:03.889823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.173 null0 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 055ca75e9616404981e0c4d4e420c4e5 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:17.173 
22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.173 [2024-07-15 22:12:03.937946] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.173 22:12:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.452 nvme0n1 00:17:17.452 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.452 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:17.452 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.452 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.452 [ 00:17:17.452 { 00:17:17.452 "aliases": [ 00:17:17.452 "055ca75e-9616-4049-81e0-c4d4e420c4e5" 00:17:17.452 ], 00:17:17.452 "assigned_rate_limits": { 00:17:17.452 "r_mbytes_per_sec": 0, 00:17:17.452 "rw_ios_per_sec": 0, 00:17:17.452 "rw_mbytes_per_sec": 0, 00:17:17.452 "w_mbytes_per_sec": 0 00:17:17.452 }, 00:17:17.452 "block_size": 512, 00:17:17.452 "claimed": false, 00:17:17.452 "driver_specific": { 00:17:17.452 "mp_policy": "active_passive", 00:17:17.452 "nvme": [ 00:17:17.452 { 00:17:17.452 "ctrlr_data": { 00:17:17.452 "ana_reporting": false, 00:17:17.452 "cntlid": 1, 00:17:17.452 "firmware_revision": "24.09", 00:17:17.452 "model_number": "SPDK bdev Controller", 00:17:17.452 "multi_ctrlr": true, 00:17:17.452 "oacs": { 00:17:17.452 "firmware": 0, 00:17:17.452 "format": 0, 00:17:17.452 "ns_manage": 0, 00:17:17.452 "security": 0 00:17:17.452 }, 00:17:17.452 "serial_number": "00000000000000000000", 00:17:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:17.453 "vendor_id": "0x8086" 00:17:17.453 }, 00:17:17.453 "ns_data": { 00:17:17.453 "can_share": true, 00:17:17.453 "id": 1 00:17:17.453 }, 00:17:17.453 "trid": { 00:17:17.453 "adrfam": "IPv4", 00:17:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:17.453 "traddr": "10.0.0.2", 00:17:17.453 "trsvcid": "4420", 00:17:17.453 "trtype": "TCP" 00:17:17.453 }, 00:17:17.453 "vs": { 00:17:17.453 "nvme_version": "1.3" 00:17:17.453 } 00:17:17.453 } 00:17:17.453 ] 00:17:17.453 }, 00:17:17.453 "memory_domains": [ 00:17:17.453 { 00:17:17.453 "dma_device_id": "system", 00:17:17.453 "dma_device_type": 1 00:17:17.453 } 00:17:17.453 ], 00:17:17.453 "name": "nvme0n1", 00:17:17.453 "num_blocks": 2097152, 00:17:17.453 "product_name": "NVMe disk", 00:17:17.453 "supported_io_types": { 00:17:17.453 "abort": true, 00:17:17.453 "compare": true, 00:17:17.453 "compare_and_write": true, 00:17:17.453 "copy": true, 00:17:17.453 "flush": true, 00:17:17.453 "get_zone_info": false, 00:17:17.453 "nvme_admin": true, 00:17:17.453 "nvme_io": true, 00:17:17.453 "nvme_io_md": false, 00:17:17.453 "nvme_iov_md": false, 00:17:17.453 "read": true, 00:17:17.453 "reset": true, 00:17:17.453 "seek_data": false, 00:17:17.453 "seek_hole": false, 00:17:17.453 "unmap": false, 00:17:17.453 "write": true, 00:17:17.453 "write_zeroes": true, 00:17:17.453 "zcopy": false, 00:17:17.453 
"zone_append": false, 00:17:17.453 "zone_management": false 00:17:17.453 }, 00:17:17.453 "uuid": "055ca75e-9616-4049-81e0-c4d4e420c4e5", 00:17:17.453 "zoned": false 00:17:17.453 } 00:17:17.453 ] 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.453 [2024-07-15 22:12:04.199702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:17.453 [2024-07-15 22:12:04.199801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18afa30 (9): Bad file descriptor 00:17:17.453 [2024-07-15 22:12:04.332284] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.453 [ 00:17:17.453 { 00:17:17.453 "aliases": [ 00:17:17.453 "055ca75e-9616-4049-81e0-c4d4e420c4e5" 00:17:17.453 ], 00:17:17.453 "assigned_rate_limits": { 00:17:17.453 "r_mbytes_per_sec": 0, 00:17:17.453 "rw_ios_per_sec": 0, 00:17:17.453 "rw_mbytes_per_sec": 0, 00:17:17.453 "w_mbytes_per_sec": 0 00:17:17.453 }, 00:17:17.453 "block_size": 512, 00:17:17.453 "claimed": false, 00:17:17.453 "driver_specific": { 00:17:17.453 "mp_policy": "active_passive", 00:17:17.453 "nvme": [ 00:17:17.453 { 00:17:17.453 "ctrlr_data": { 00:17:17.453 "ana_reporting": false, 00:17:17.453 "cntlid": 2, 00:17:17.453 "firmware_revision": "24.09", 00:17:17.453 "model_number": "SPDK bdev Controller", 00:17:17.453 "multi_ctrlr": true, 00:17:17.453 "oacs": { 00:17:17.453 "firmware": 0, 00:17:17.453 "format": 0, 00:17:17.453 "ns_manage": 0, 00:17:17.453 "security": 0 00:17:17.453 }, 00:17:17.453 "serial_number": "00000000000000000000", 00:17:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:17.453 "vendor_id": "0x8086" 00:17:17.453 }, 00:17:17.453 "ns_data": { 00:17:17.453 "can_share": true, 00:17:17.453 "id": 1 00:17:17.453 }, 00:17:17.453 "trid": { 00:17:17.453 "adrfam": "IPv4", 00:17:17.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:17.453 "traddr": "10.0.0.2", 00:17:17.453 "trsvcid": "4420", 00:17:17.453 "trtype": "TCP" 00:17:17.453 }, 00:17:17.453 "vs": { 00:17:17.453 "nvme_version": "1.3" 00:17:17.453 } 00:17:17.453 } 00:17:17.453 ] 00:17:17.453 }, 00:17:17.453 "memory_domains": [ 00:17:17.453 { 00:17:17.453 "dma_device_id": "system", 00:17:17.453 "dma_device_type": 1 00:17:17.453 } 00:17:17.453 ], 00:17:17.453 "name": "nvme0n1", 00:17:17.453 "num_blocks": 2097152, 00:17:17.453 "product_name": "NVMe disk", 00:17:17.453 "supported_io_types": { 00:17:17.453 "abort": true, 00:17:17.453 "compare": true, 00:17:17.453 "compare_and_write": true, 00:17:17.453 "copy": true, 00:17:17.453 "flush": true, 00:17:17.453 "get_zone_info": false, 00:17:17.453 "nvme_admin": true, 00:17:17.453 "nvme_io": true, 00:17:17.453 "nvme_io_md": false, 00:17:17.453 "nvme_iov_md": false, 00:17:17.453 "read": true, 
00:17:17.453 "reset": true, 00:17:17.453 "seek_data": false, 00:17:17.453 "seek_hole": false, 00:17:17.453 "unmap": false, 00:17:17.453 "write": true, 00:17:17.453 "write_zeroes": true, 00:17:17.453 "zcopy": false, 00:17:17.453 "zone_append": false, 00:17:17.453 "zone_management": false 00:17:17.453 }, 00:17:17.453 "uuid": "055ca75e-9616-4049-81e0-c4d4e420c4e5", 00:17:17.453 "zoned": false 00:17:17.453 } 00:17:17.453 ] 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.vaVKlIeuup 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.vaVKlIeuup 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.453 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.711 [2024-07-15 22:12:04.403909] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:17.711 [2024-07-15 22:12:04.404128] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vaVKlIeuup 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.711 [2024-07-15 22:12:04.411883] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.vaVKlIeuup 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.711 22:12:04 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.711 [2024-07-15 22:12:04.423899] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:17.711 [2024-07-15 22:12:04.423970] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:17.711 nvme0n1 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.711 [ 00:17:17.711 { 00:17:17.711 "aliases": [ 00:17:17.711 "055ca75e-9616-4049-81e0-c4d4e420c4e5" 00:17:17.711 ], 00:17:17.711 "assigned_rate_limits": { 00:17:17.711 "r_mbytes_per_sec": 0, 00:17:17.711 "rw_ios_per_sec": 0, 00:17:17.711 "rw_mbytes_per_sec": 0, 00:17:17.711 "w_mbytes_per_sec": 0 00:17:17.711 }, 00:17:17.711 "block_size": 512, 00:17:17.711 "claimed": false, 00:17:17.711 "driver_specific": { 00:17:17.711 "mp_policy": "active_passive", 00:17:17.711 "nvme": [ 00:17:17.711 { 00:17:17.711 "ctrlr_data": { 00:17:17.711 "ana_reporting": false, 00:17:17.711 "cntlid": 3, 00:17:17.711 "firmware_revision": "24.09", 00:17:17.711 "model_number": "SPDK bdev Controller", 00:17:17.711 "multi_ctrlr": true, 00:17:17.711 "oacs": { 00:17:17.711 "firmware": 0, 00:17:17.711 "format": 0, 00:17:17.711 "ns_manage": 0, 00:17:17.711 "security": 0 00:17:17.711 }, 00:17:17.711 "serial_number": "00000000000000000000", 00:17:17.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:17.711 "vendor_id": "0x8086" 00:17:17.711 }, 00:17:17.711 "ns_data": { 00:17:17.711 "can_share": true, 00:17:17.711 "id": 1 00:17:17.711 }, 00:17:17.711 "trid": { 00:17:17.711 "adrfam": "IPv4", 00:17:17.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:17.711 "traddr": "10.0.0.2", 00:17:17.711 "trsvcid": "4421", 00:17:17.711 "trtype": "TCP" 00:17:17.711 }, 00:17:17.711 "vs": { 00:17:17.711 "nvme_version": "1.3" 00:17:17.711 } 00:17:17.711 } 00:17:17.711 ] 00:17:17.711 }, 00:17:17.711 "memory_domains": [ 00:17:17.711 { 00:17:17.711 "dma_device_id": "system", 00:17:17.711 "dma_device_type": 1 00:17:17.711 } 00:17:17.711 ], 00:17:17.711 "name": "nvme0n1", 00:17:17.711 "num_blocks": 2097152, 00:17:17.711 "product_name": "NVMe disk", 00:17:17.711 "supported_io_types": { 00:17:17.711 "abort": true, 00:17:17.711 "compare": true, 00:17:17.711 "compare_and_write": true, 00:17:17.711 "copy": true, 00:17:17.711 "flush": true, 00:17:17.711 "get_zone_info": false, 00:17:17.711 "nvme_admin": true, 00:17:17.711 "nvme_io": true, 00:17:17.711 "nvme_io_md": false, 00:17:17.711 "nvme_iov_md": false, 00:17:17.711 "read": true, 00:17:17.711 "reset": true, 00:17:17.711 "seek_data": false, 00:17:17.711 "seek_hole": false, 00:17:17.711 "unmap": false, 00:17:17.711 "write": true, 00:17:17.711 "write_zeroes": true, 00:17:17.711 "zcopy": false, 00:17:17.711 "zone_append": false, 00:17:17.711 "zone_management": false 00:17:17.711 }, 00:17:17.711 "uuid": "055ca75e-9616-4049-81e0-c4d4e420c4e5", 00:17:17.711 "zoned": false 00:17:17.711 } 00:17:17.711 ] 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.vaVKlIeuup 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:17.711 rmmod nvme_tcp 00:17:17.711 rmmod nvme_fabrics 00:17:17.711 rmmod nvme_keyring 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86455 ']' 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86455 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86455 ']' 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86455 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:17.711 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86455 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:17.969 killing process with pid 86455 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86455' 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86455 00:17:17.969 [2024-07-15 22:12:04.673616] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:17.969 [2024-07-15 22:12:04.673658] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86455 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.969 
22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:17.969 00:17:17.969 real 0m1.829s 00:17:17.969 user 0m1.520s 00:17:17.969 sys 0m0.508s 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:17.969 ************************************ 00:17:17.969 END TEST nvmf_async_init 00:17:17.969 22:12:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:17.969 ************************************ 00:17:17.969 22:12:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:17.969 22:12:04 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:17.969 22:12:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:17.969 22:12:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.969 22:12:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:17.969 ************************************ 00:17:17.969 START TEST dma 00:17:17.969 ************************************ 00:17:17.969 22:12:04 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:18.227 * Looking for test storage... 00:17:18.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:18.227 22:12:04 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:18.227 22:12:04 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:17:18.227 22:12:04 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.227 22:12:04 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.227 22:12:04 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.227 22:12:04 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.227 22:12:04 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.227 22:12:04 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.227 22:12:04 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.227 22:12:04 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.227 22:12:04 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.227 22:12:04 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:18.227 22:12:05 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.227 22:12:05 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.227 22:12:05 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.227 22:12:05 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.227 22:12:05 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.227 22:12:05 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.227 22:12:05 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:17:18.227 22:12:05 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.227 22:12:05 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.227 22:12:05 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:18.227 22:12:05 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:17:18.227 00:17:18.227 real 0m0.104s 00:17:18.227 user 0m0.053s 00:17:18.227 sys 0m0.058s 00:17:18.227 22:12:05 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.227 22:12:05 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:17:18.227 ************************************ 00:17:18.227 END TEST dma 00:17:18.227 ************************************ 00:17:18.227 22:12:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:18.227 22:12:05 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:18.227 22:12:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:18.227 22:12:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.227 22:12:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:18.227 ************************************ 00:17:18.227 START TEST nvmf_identify 00:17:18.227 ************************************ 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:18.227 * Looking for test storage... 00:17:18.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.227 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 
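Before the identify test can reach the target, nvmftestinit rebuilds the same veth topology that was already traced for the async_init run: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target gets nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) inside nvmf_tgt_ns_spdk, and the peer ends are enslaved to the nvmf_br bridge, with an iptables accept rule for TCP/4420. Condensed from the commands in the trace into a standalone sketch (run as root; names and addresses exactly as the harness uses them):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target ends move into the ns below
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # root ns -> target ns, as verified in the trace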
00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:18.228 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:18.485 Cannot find device "nvmf_tgt_br" 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.485 Cannot find device "nvmf_tgt_br2" 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:17:18.485 Cannot find device "nvmf_tgt_br" 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:18.485 Cannot find device "nvmf_tgt_br2" 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:18.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:18.485 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:18.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:17:18.742 00:17:18.742 --- 10.0.0.2 ping statistics --- 00:17:18.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.742 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:18.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:18.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:18.742 00:17:18.742 --- 10.0.0.3 ping statistics --- 00:17:18.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.742 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:18.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:17:18.742 00:17:18.742 --- 10.0.0.1 ping statistics --- 00:17:18.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.742 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:18.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
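(Editor's note: the nvmf_veth_init sequence traced above boils down to the topology below. This is a condensed, hand-written sketch of the same ip/iptables calls shown in the log, run as root, not the common.sh code itself; interface names and addresses are exactly the ones in the trace.)

    #!/usr/bin/env bash
    # Sketch of the veth/namespace topology nvmf_veth_init builds (names as in the trace).
    set -e
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side stays in the root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target sides move into the ns
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    ip link add nvmf_br type bridge                                 # bridge ties the *_br peers together
    ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # initiator -> target, as in the log
    ip netns exec "$NS" ping -c 1 10.0.0.1                          # target -> initiator
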
00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86705 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86705 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86705 ']' 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.742 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:18.742 [2024-07-15 22:12:05.592143] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:17:18.742 [2024-07-15 22:12:05.592231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.999 [2024-07-15 22:12:05.729051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.999 [2024-07-15 22:12:05.801912] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.999 [2024-07-15 22:12:05.801976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.999 [2024-07-15 22:12:05.801990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.999 [2024-07-15 22:12:05.802000] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.999 [2024-07-15 22:12:05.802009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
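(Editor's note: the target launch and waitforlisten step traced above can be approximated as follows. The nvmf_tgt command line is the one shown in the log; the simple socket-polling loop is only a stand-in for the real waitforlisten helper and is an assumption.)

    #!/usr/bin/env bash
    # Sketch: launch nvmf_tgt inside the target namespace and wait for its RPC socket.
    SPDK=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Crude stand-in for waitforlisten: poll until the UNIX domain socket appears.
    for _ in $(seq 1 100); do
        [ -S "$RPC_SOCK" ] && break
        sleep 0.1
    done
    echo "nvmf_tgt running as pid $nvmfpid, RPC socket at $RPC_SOCK"
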
00:17:18.999 [2024-07-15 22:12:05.802178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.999 [2024-07-15 22:12:05.802309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.999 [2024-07-15 22:12:05.802963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.999 [2024-07-15 22:12:05.802975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.999 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.999 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:17:18.999 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.999 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.999 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:18.999 [2024-07-15 22:12:05.898370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.999 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.999 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:18.999 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:18.999 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:18.999 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:19.257 Malloc0 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:19.257 [2024-07-15 22:12:05.985656] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.257 22:12:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:19.257 [ 00:17:19.257 { 00:17:19.257 "allow_any_host": true, 00:17:19.257 "hosts": [], 00:17:19.257 "listen_addresses": [ 00:17:19.257 { 00:17:19.257 "adrfam": "IPv4", 00:17:19.257 "traddr": "10.0.0.2", 00:17:19.257 "trsvcid": "4420", 00:17:19.257 "trtype": "TCP" 00:17:19.257 } 00:17:19.257 ], 00:17:19.257 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:19.257 "subtype": "Discovery" 00:17:19.257 }, 00:17:19.257 { 00:17:19.257 "allow_any_host": true, 00:17:19.257 "hosts": [], 00:17:19.257 "listen_addresses": [ 00:17:19.257 { 00:17:19.257 "adrfam": "IPv4", 00:17:19.257 "traddr": "10.0.0.2", 00:17:19.257 "trsvcid": "4420", 00:17:19.257 "trtype": "TCP" 00:17:19.257 } 00:17:19.257 ], 00:17:19.257 "max_cntlid": 65519, 00:17:19.257 "max_namespaces": 32, 00:17:19.257 "min_cntlid": 1, 00:17:19.257 "model_number": "SPDK bdev Controller", 00:17:19.257 "namespaces": [ 00:17:19.257 { 00:17:19.257 "bdev_name": "Malloc0", 00:17:19.257 "eui64": "ABCDEF0123456789", 00:17:19.257 "name": "Malloc0", 00:17:19.257 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:19.257 "nsid": 1, 00:17:19.257 "uuid": "3f732880-1c77-43a0-97ef-0dbb6d4c4bf9" 00:17:19.257 } 00:17:19.257 ], 00:17:19.257 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.257 "serial_number": "SPDK00000000000001", 00:17:19.257 "subtype": "NVMe" 00:17:19.257 } 00:17:19.257 ] 00:17:19.257 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.257 22:12:06 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:19.257 [2024-07-15 22:12:06.035746] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
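(Editor's note: the rpc_cmd calls above, which configured the target before the identify run now starting, plus the identify invocation itself from identify.sh@39, correspond roughly to the standalone sequence below. This assumes rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; the arguments themselves are exactly the ones visible in the trace.)

    #!/usr/bin/env bash
    # Sketch: target configuration and discovery-identify run, mirroring the traced rpc_cmd calls.
    SPDK=/home/vagrant/spdk_repo/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }    # assumed rpc_cmd equivalent

    rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport with the options traced above
    rpc bdev_malloc_create 64 512 -b Malloc0                        # RAM-backed bdev: 64 MB, 512-byte blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_get_subsystems                                         # prints the JSON dump shown above

    # Identify the discovery controller over NVMe/TCP, as in identify.sh@39.
    "$SPDK/build/bin/spdk_nvme_identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all
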
00:17:19.257 [2024-07-15 22:12:06.035803] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86746 ] 00:17:19.257 [2024-07-15 22:12:06.180425] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:19.257 [2024-07-15 22:12:06.180573] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:19.257 [2024-07-15 22:12:06.180590] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:19.257 [2024-07-15 22:12:06.180614] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:19.257 [2024-07-15 22:12:06.180627] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:19.257 [2024-07-15 22:12:06.180847] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:19.257 [2024-07-15 22:12:06.180945] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ef6a60 0 00:17:19.257 [2024-07-15 22:12:06.187129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:19.257 [2024-07-15 22:12:06.187173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:19.257 [2024-07-15 22:12:06.187185] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:19.257 [2024-07-15 22:12:06.187192] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:19.257 [2024-07-15 22:12:06.187264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.187279] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.187287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef6a60) 00:17:19.257 [2024-07-15 22:12:06.187309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:19.257 [2024-07-15 22:12:06.187358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39840, cid 0, qid 0 00:17:19.257 [2024-07-15 22:12:06.195120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.257 [2024-07-15 22:12:06.195160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.257 [2024-07-15 22:12:06.195172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.195183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39840) on tqpair=0x1ef6a60 00:17:19.257 [2024-07-15 22:12:06.195207] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:19.257 [2024-07-15 22:12:06.195222] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:19.257 [2024-07-15 22:12:06.195233] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:19.257 [2024-07-15 22:12:06.195263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.195274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.257 
[2024-07-15 22:12:06.195281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef6a60) 00:17:19.257 [2024-07-15 22:12:06.195299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.257 [2024-07-15 22:12:06.195349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39840, cid 0, qid 0 00:17:19.257 [2024-07-15 22:12:06.195461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.257 [2024-07-15 22:12:06.195486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.257 [2024-07-15 22:12:06.195497] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.195504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39840) on tqpair=0x1ef6a60 00:17:19.257 [2024-07-15 22:12:06.195516] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:19.257 [2024-07-15 22:12:06.195533] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:19.257 [2024-07-15 22:12:06.195549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.195558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.195565] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef6a60) 00:17:19.257 [2024-07-15 22:12:06.195578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.257 [2024-07-15 22:12:06.195618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39840, cid 0, qid 0 00:17:19.257 [2024-07-15 22:12:06.195713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.257 [2024-07-15 22:12:06.195728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.257 [2024-07-15 22:12:06.195736] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.195743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39840) on tqpair=0x1ef6a60 00:17:19.257 [2024-07-15 22:12:06.195754] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:19.257 [2024-07-15 22:12:06.195769] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:19.257 [2024-07-15 22:12:06.195783] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.195791] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.195798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef6a60) 00:17:19.257 [2024-07-15 22:12:06.195811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.257 [2024-07-15 22:12:06.195848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39840, cid 0, qid 0 00:17:19.257 [2024-07-15 22:12:06.195938] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.257 [2024-07-15 22:12:06.195957] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.257 [2024-07-15 22:12:06.195966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.195974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39840) on tqpair=0x1ef6a60 00:17:19.257 [2024-07-15 22:12:06.195984] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:19.257 [2024-07-15 22:12:06.196002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.196010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.196017] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef6a60) 00:17:19.257 [2024-07-15 22:12:06.196029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.257 [2024-07-15 22:12:06.196078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39840, cid 0, qid 0 00:17:19.257 [2024-07-15 22:12:06.196187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.257 [2024-07-15 22:12:06.196212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.257 [2024-07-15 22:12:06.196221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.196230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39840) on tqpair=0x1ef6a60 00:17:19.257 [2024-07-15 22:12:06.196244] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:19.257 [2024-07-15 22:12:06.196254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:19.257 [2024-07-15 22:12:06.196269] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:19.257 [2024-07-15 22:12:06.196380] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:19.257 [2024-07-15 22:12:06.196403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:19.257 [2024-07-15 22:12:06.196422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.196432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.196439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef6a60) 00:17:19.257 [2024-07-15 22:12:06.196453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.257 [2024-07-15 22:12:06.196493] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39840, cid 0, qid 0 00:17:19.257 [2024-07-15 22:12:06.196576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.257 [2024-07-15 22:12:06.196596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.257 [2024-07-15 22:12:06.196604] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.257 
[2024-07-15 22:12:06.196612] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39840) on tqpair=0x1ef6a60 00:17:19.257 [2024-07-15 22:12:06.196622] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:19.257 [2024-07-15 22:12:06.196640] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.196649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.196657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef6a60) 00:17:19.257 [2024-07-15 22:12:06.196669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.257 [2024-07-15 22:12:06.196704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39840, cid 0, qid 0 00:17:19.257 [2024-07-15 22:12:06.196777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.257 [2024-07-15 22:12:06.196790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.257 [2024-07-15 22:12:06.196797] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.196805] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39840) on tqpair=0x1ef6a60 00:17:19.257 [2024-07-15 22:12:06.196814] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:19.257 [2024-07-15 22:12:06.196823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:19.257 [2024-07-15 22:12:06.196838] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:19.257 [2024-07-15 22:12:06.196856] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:19.257 [2024-07-15 22:12:06.196874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.196883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef6a60) 00:17:19.257 [2024-07-15 22:12:06.196896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.257 [2024-07-15 22:12:06.196933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39840, cid 0, qid 0 00:17:19.257 [2024-07-15 22:12:06.197077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.257 [2024-07-15 22:12:06.197113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.257 [2024-07-15 22:12:06.197122] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.197129] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef6a60): datao=0, datal=4096, cccid=0 00:17:19.257 [2024-07-15 22:12:06.197138] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f39840) on tqpair(0x1ef6a60): expected_datao=0, payload_size=4096 00:17:19.257 [2024-07-15 22:12:06.197147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.257 
[2024-07-15 22:12:06.197161] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.197169] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.197185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.257 [2024-07-15 22:12:06.197196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.257 [2024-07-15 22:12:06.197204] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.197211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39840) on tqpair=0x1ef6a60 00:17:19.257 [2024-07-15 22:12:06.197226] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:19.257 [2024-07-15 22:12:06.197235] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:19.257 [2024-07-15 22:12:06.197243] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:19.257 [2024-07-15 22:12:06.197253] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:19.257 [2024-07-15 22:12:06.197261] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:19.257 [2024-07-15 22:12:06.197270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:19.257 [2024-07-15 22:12:06.197288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:19.257 [2024-07-15 22:12:06.197303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.197312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.197318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef6a60) 00:17:19.257 [2024-07-15 22:12:06.197332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.257 [2024-07-15 22:12:06.197373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39840, cid 0, qid 0 00:17:19.257 [2024-07-15 22:12:06.197481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.257 [2024-07-15 22:12:06.197495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.257 [2024-07-15 22:12:06.197503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.197510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39840) on tqpair=0x1ef6a60 00:17:19.257 [2024-07-15 22:12:06.197524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.257 [2024-07-15 22:12:06.197532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.197539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ef6a60) 00:17:19.258 [2024-07-15 22:12:06.197551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.258 [2024-07-15 22:12:06.197562] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.197570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.197577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ef6a60) 00:17:19.258 [2024-07-15 22:12:06.197587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.258 [2024-07-15 22:12:06.197597] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.197604] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.197611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ef6a60) 00:17:19.258 [2024-07-15 22:12:06.197621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.258 [2024-07-15 22:12:06.197632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.197639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.197646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.258 [2024-07-15 22:12:06.197656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.258 [2024-07-15 22:12:06.197666] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:19.258 [2024-07-15 22:12:06.197689] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:19.258 [2024-07-15 22:12:06.197702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.197710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef6a60) 00:17:19.258 [2024-07-15 22:12:06.197722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.258 [2024-07-15 22:12:06.197762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39840, cid 0, qid 0 00:17:19.258 [2024-07-15 22:12:06.197775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f399c0, cid 1, qid 0 00:17:19.258 [2024-07-15 22:12:06.197783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39b40, cid 2, qid 0 00:17:19.258 [2024-07-15 22:12:06.197792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.258 [2024-07-15 22:12:06.197800] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39e40, cid 4, qid 0 00:17:19.258 [2024-07-15 22:12:06.197931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.258 [2024-07-15 22:12:06.197953] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.258 [2024-07-15 22:12:06.197963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.197971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39e40) on tqpair=0x1ef6a60 00:17:19.258 [2024-07-15 22:12:06.197980] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:19.258 [2024-07-15 22:12:06.197996] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:19.258 [2024-07-15 22:12:06.198019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef6a60) 00:17:19.258 [2024-07-15 22:12:06.198042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.258 [2024-07-15 22:12:06.198097] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39e40, cid 4, qid 0 00:17:19.258 [2024-07-15 22:12:06.198209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.258 [2024-07-15 22:12:06.198231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.258 [2024-07-15 22:12:06.198240] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198247] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef6a60): datao=0, datal=4096, cccid=4 00:17:19.258 [2024-07-15 22:12:06.198255] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f39e40) on tqpair(0x1ef6a60): expected_datao=0, payload_size=4096 00:17:19.258 [2024-07-15 22:12:06.198263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198276] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198283] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.258 [2024-07-15 22:12:06.198309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.258 [2024-07-15 22:12:06.198315] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39e40) on tqpair=0x1ef6a60 00:17:19.258 [2024-07-15 22:12:06.198345] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:19.258 [2024-07-15 22:12:06.198409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef6a60) 00:17:19.258 [2024-07-15 22:12:06.198442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.258 [2024-07-15 22:12:06.198456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198471] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ef6a60) 00:17:19.258 [2024-07-15 22:12:06.198481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.258 [2024-07-15 22:12:06.198529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1f39e40, cid 4, qid 0 00:17:19.258 [2024-07-15 22:12:06.198543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39fc0, cid 5, qid 0 00:17:19.258 [2024-07-15 22:12:06.198696] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.258 [2024-07-15 22:12:06.198723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.258 [2024-07-15 22:12:06.198732] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198739] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef6a60): datao=0, datal=1024, cccid=4 00:17:19.258 [2024-07-15 22:12:06.198748] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f39e40) on tqpair(0x1ef6a60): expected_datao=0, payload_size=1024 00:17:19.258 [2024-07-15 22:12:06.198756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198767] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198774] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.258 [2024-07-15 22:12:06.198794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.258 [2024-07-15 22:12:06.198801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.258 [2024-07-15 22:12:06.198808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39fc0) on tqpair=0x1ef6a60 00:17:19.518 [2024-07-15 22:12:06.242231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.518 [2024-07-15 22:12:06.242289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.518 [2024-07-15 22:12:06.242302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.242313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39e40) on tqpair=0x1ef6a60 00:17:19.518 [2024-07-15 22:12:06.242350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.242362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef6a60) 00:17:19.518 [2024-07-15 22:12:06.242382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.518 [2024-07-15 22:12:06.242440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39e40, cid 4, qid 0 00:17:19.518 [2024-07-15 22:12:06.242620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.518 [2024-07-15 22:12:06.242646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.518 [2024-07-15 22:12:06.242655] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.242662] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef6a60): datao=0, datal=3072, cccid=4 00:17:19.518 [2024-07-15 22:12:06.242671] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f39e40) on tqpair(0x1ef6a60): expected_datao=0, payload_size=3072 00:17:19.518 [2024-07-15 22:12:06.242680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.242694] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.242702] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.242718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.518 [2024-07-15 22:12:06.242729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.518 [2024-07-15 22:12:06.242735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.242743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39e40) on tqpair=0x1ef6a60 00:17:19.518 [2024-07-15 22:12:06.242763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.242772] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ef6a60) 00:17:19.518 [2024-07-15 22:12:06.242787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.518 [2024-07-15 22:12:06.242836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39e40, cid 4, qid 0 00:17:19.518 [2024-07-15 22:12:06.242957] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.518 [2024-07-15 22:12:06.242971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.518 [2024-07-15 22:12:06.242978] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.242985] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ef6a60): datao=0, datal=8, cccid=4 00:17:19.518 [2024-07-15 22:12:06.242994] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f39e40) on tqpair(0x1ef6a60): expected_datao=0, payload_size=8 00:17:19.518 [2024-07-15 22:12:06.243002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.243014] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.243022] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.283203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.518 [2024-07-15 22:12:06.283257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.518 [2024-07-15 22:12:06.283269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.283278] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39e40) on tqpair=0x1ef6a60 00:17:19.518 ===================================================== 00:17:19.518 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:19.518 ===================================================== 00:17:19.518 Controller Capabilities/Features 00:17:19.518 ================================ 00:17:19.518 Vendor ID: 0000 00:17:19.518 Subsystem Vendor ID: 0000 00:17:19.518 Serial Number: .................... 00:17:19.518 Model Number: ........................................ 
00:17:19.518 Firmware Version: 24.09 00:17:19.518 Recommended Arb Burst: 0 00:17:19.518 IEEE OUI Identifier: 00 00 00 00:17:19.518 Multi-path I/O 00:17:19.518 May have multiple subsystem ports: No 00:17:19.518 May have multiple controllers: No 00:17:19.518 Associated with SR-IOV VF: No 00:17:19.518 Max Data Transfer Size: 131072 00:17:19.518 Max Number of Namespaces: 0 00:17:19.518 Max Number of I/O Queues: 1024 00:17:19.518 NVMe Specification Version (VS): 1.3 00:17:19.518 NVMe Specification Version (Identify): 1.3 00:17:19.518 Maximum Queue Entries: 128 00:17:19.518 Contiguous Queues Required: Yes 00:17:19.518 Arbitration Mechanisms Supported 00:17:19.518 Weighted Round Robin: Not Supported 00:17:19.518 Vendor Specific: Not Supported 00:17:19.518 Reset Timeout: 15000 ms 00:17:19.518 Doorbell Stride: 4 bytes 00:17:19.518 NVM Subsystem Reset: Not Supported 00:17:19.518 Command Sets Supported 00:17:19.518 NVM Command Set: Supported 00:17:19.518 Boot Partition: Not Supported 00:17:19.518 Memory Page Size Minimum: 4096 bytes 00:17:19.518 Memory Page Size Maximum: 4096 bytes 00:17:19.518 Persistent Memory Region: Not Supported 00:17:19.518 Optional Asynchronous Events Supported 00:17:19.518 Namespace Attribute Notices: Not Supported 00:17:19.518 Firmware Activation Notices: Not Supported 00:17:19.518 ANA Change Notices: Not Supported 00:17:19.518 PLE Aggregate Log Change Notices: Not Supported 00:17:19.518 LBA Status Info Alert Notices: Not Supported 00:17:19.518 EGE Aggregate Log Change Notices: Not Supported 00:17:19.518 Normal NVM Subsystem Shutdown event: Not Supported 00:17:19.518 Zone Descriptor Change Notices: Not Supported 00:17:19.518 Discovery Log Change Notices: Supported 00:17:19.518 Controller Attributes 00:17:19.518 128-bit Host Identifier: Not Supported 00:17:19.518 Non-Operational Permissive Mode: Not Supported 00:17:19.518 NVM Sets: Not Supported 00:17:19.518 Read Recovery Levels: Not Supported 00:17:19.518 Endurance Groups: Not Supported 00:17:19.518 Predictable Latency Mode: Not Supported 00:17:19.518 Traffic Based Keep ALive: Not Supported 00:17:19.518 Namespace Granularity: Not Supported 00:17:19.518 SQ Associations: Not Supported 00:17:19.518 UUID List: Not Supported 00:17:19.518 Multi-Domain Subsystem: Not Supported 00:17:19.518 Fixed Capacity Management: Not Supported 00:17:19.518 Variable Capacity Management: Not Supported 00:17:19.518 Delete Endurance Group: Not Supported 00:17:19.518 Delete NVM Set: Not Supported 00:17:19.518 Extended LBA Formats Supported: Not Supported 00:17:19.518 Flexible Data Placement Supported: Not Supported 00:17:19.518 00:17:19.518 Controller Memory Buffer Support 00:17:19.518 ================================ 00:17:19.518 Supported: No 00:17:19.518 00:17:19.518 Persistent Memory Region Support 00:17:19.518 ================================ 00:17:19.518 Supported: No 00:17:19.518 00:17:19.518 Admin Command Set Attributes 00:17:19.518 ============================ 00:17:19.518 Security Send/Receive: Not Supported 00:17:19.518 Format NVM: Not Supported 00:17:19.518 Firmware Activate/Download: Not Supported 00:17:19.518 Namespace Management: Not Supported 00:17:19.518 Device Self-Test: Not Supported 00:17:19.518 Directives: Not Supported 00:17:19.518 NVMe-MI: Not Supported 00:17:19.518 Virtualization Management: Not Supported 00:17:19.518 Doorbell Buffer Config: Not Supported 00:17:19.518 Get LBA Status Capability: Not Supported 00:17:19.518 Command & Feature Lockdown Capability: Not Supported 00:17:19.518 Abort Command Limit: 1 00:17:19.518 Async 
Event Request Limit: 4 00:17:19.518 Number of Firmware Slots: N/A 00:17:19.518 Firmware Slot 1 Read-Only: N/A 00:17:19.518 Firmware Activation Without Reset: N/A 00:17:19.518 Multiple Update Detection Support: N/A 00:17:19.518 Firmware Update Granularity: No Information Provided 00:17:19.518 Per-Namespace SMART Log: No 00:17:19.518 Asymmetric Namespace Access Log Page: Not Supported 00:17:19.518 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:19.518 Command Effects Log Page: Not Supported 00:17:19.518 Get Log Page Extended Data: Supported 00:17:19.518 Telemetry Log Pages: Not Supported 00:17:19.518 Persistent Event Log Pages: Not Supported 00:17:19.518 Supported Log Pages Log Page: May Support 00:17:19.518 Commands Supported & Effects Log Page: Not Supported 00:17:19.518 Feature Identifiers & Effects Log Page:May Support 00:17:19.518 NVMe-MI Commands & Effects Log Page: May Support 00:17:19.518 Data Area 4 for Telemetry Log: Not Supported 00:17:19.518 Error Log Page Entries Supported: 128 00:17:19.518 Keep Alive: Not Supported 00:17:19.518 00:17:19.518 NVM Command Set Attributes 00:17:19.518 ========================== 00:17:19.518 Submission Queue Entry Size 00:17:19.518 Max: 1 00:17:19.518 Min: 1 00:17:19.518 Completion Queue Entry Size 00:17:19.518 Max: 1 00:17:19.518 Min: 1 00:17:19.518 Number of Namespaces: 0 00:17:19.518 Compare Command: Not Supported 00:17:19.518 Write Uncorrectable Command: Not Supported 00:17:19.518 Dataset Management Command: Not Supported 00:17:19.518 Write Zeroes Command: Not Supported 00:17:19.518 Set Features Save Field: Not Supported 00:17:19.518 Reservations: Not Supported 00:17:19.518 Timestamp: Not Supported 00:17:19.518 Copy: Not Supported 00:17:19.518 Volatile Write Cache: Not Present 00:17:19.518 Atomic Write Unit (Normal): 1 00:17:19.518 Atomic Write Unit (PFail): 1 00:17:19.518 Atomic Compare & Write Unit: 1 00:17:19.518 Fused Compare & Write: Supported 00:17:19.518 Scatter-Gather List 00:17:19.518 SGL Command Set: Supported 00:17:19.518 SGL Keyed: Supported 00:17:19.518 SGL Bit Bucket Descriptor: Not Supported 00:17:19.518 SGL Metadata Pointer: Not Supported 00:17:19.518 Oversized SGL: Not Supported 00:17:19.518 SGL Metadata Address: Not Supported 00:17:19.518 SGL Offset: Supported 00:17:19.518 Transport SGL Data Block: Not Supported 00:17:19.518 Replay Protected Memory Block: Not Supported 00:17:19.518 00:17:19.518 Firmware Slot Information 00:17:19.518 ========================= 00:17:19.518 Active slot: 0 00:17:19.518 00:17:19.518 00:17:19.518 Error Log 00:17:19.518 ========= 00:17:19.518 00:17:19.518 Active Namespaces 00:17:19.518 ================= 00:17:19.518 Discovery Log Page 00:17:19.518 ================== 00:17:19.518 Generation Counter: 2 00:17:19.518 Number of Records: 2 00:17:19.518 Record Format: 0 00:17:19.518 00:17:19.518 Discovery Log Entry 0 00:17:19.518 ---------------------- 00:17:19.518 Transport Type: 3 (TCP) 00:17:19.518 Address Family: 1 (IPv4) 00:17:19.518 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:19.518 Entry Flags: 00:17:19.518 Duplicate Returned Information: 1 00:17:19.518 Explicit Persistent Connection Support for Discovery: 1 00:17:19.518 Transport Requirements: 00:17:19.518 Secure Channel: Not Required 00:17:19.518 Port ID: 0 (0x0000) 00:17:19.518 Controller ID: 65535 (0xffff) 00:17:19.518 Admin Max SQ Size: 128 00:17:19.518 Transport Service Identifier: 4420 00:17:19.518 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:19.518 Transport Address: 10.0.0.2 00:17:19.518 
Discovery Log Entry 1 00:17:19.518 ---------------------- 00:17:19.518 Transport Type: 3 (TCP) 00:17:19.518 Address Family: 1 (IPv4) 00:17:19.518 Subsystem Type: 2 (NVM Subsystem) 00:17:19.518 Entry Flags: 00:17:19.518 Duplicate Returned Information: 0 00:17:19.518 Explicit Persistent Connection Support for Discovery: 0 00:17:19.518 Transport Requirements: 00:17:19.518 Secure Channel: Not Required 00:17:19.518 Port ID: 0 (0x0000) 00:17:19.518 Controller ID: 65535 (0xffff) 00:17:19.518 Admin Max SQ Size: 128 00:17:19.518 Transport Service Identifier: 4420 00:17:19.518 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:19.518 Transport Address: 10.0.0.2 [2024-07-15 22:12:06.283491] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:19.518 [2024-07-15 22:12:06.283519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39840) on tqpair=0x1ef6a60 00:17:19.518 [2024-07-15 22:12:06.283533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.518 [2024-07-15 22:12:06.283544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f399c0) on tqpair=0x1ef6a60 00:17:19.518 [2024-07-15 22:12:06.283553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.518 [2024-07-15 22:12:06.283562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39b40) on tqpair=0x1ef6a60 00:17:19.518 [2024-07-15 22:12:06.283570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.518 [2024-07-15 22:12:06.283579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.518 [2024-07-15 22:12:06.283587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.518 [2024-07-15 22:12:06.283607] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.283616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.283623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.518 [2024-07-15 22:12:06.283639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.518 [2024-07-15 22:12:06.283683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.518 [2024-07-15 22:12:06.283807] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.518 [2024-07-15 22:12:06.283837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.518 [2024-07-15 22:12:06.283846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.283855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.518 [2024-07-15 22:12:06.283870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.283879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.518 [2024-07-15 22:12:06.283886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.518 [2024-07-15 
22:12:06.283900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.518 [2024-07-15 22:12:06.283946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.518 [2024-07-15 22:12:06.284075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.284119] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.284130] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.284138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.284149] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:19.519 [2024-07-15 22:12:06.284158] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:19.519 [2024-07-15 22:12:06.284180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.284191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.284198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.284212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.284253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.284349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.284377] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.284386] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.284394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.284416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.284425] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.284432] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.284446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.284485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.284576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.284590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.284597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.284604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.284623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.284631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.284638] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.284651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.284686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.284775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.284797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.284806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.284813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.284832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.284841] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.284849] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.284862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.284898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.284989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.285015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.285024] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.285052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.285102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.285142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.285234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.285254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.285262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285270] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.285290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.285320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.285357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.285443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.285462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.285471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.285498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.285527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.285564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.285656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.285670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.285677] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.285702] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.285730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.285767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.285858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.285878] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.285886] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.285913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285923] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.285930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.285943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.285980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 
[2024-07-15 22:12:06.286071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.286112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.286122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.286130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.286151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.286168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.286177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.286190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.286234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.286330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.286352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.286361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.286369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.286389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.286398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.286405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.286419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.286456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.286536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.286551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.286559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.286566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.286586] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.286595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.286602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.286615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.286654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.286745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.286769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:17:19.519 [2024-07-15 22:12:06.286777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.286785] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.286806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.286817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.286824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.286837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.286876] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.286970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.286985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.286992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.287000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.287020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.287030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.287037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.287050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.291105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.291153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.291169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.291177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.291185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.291210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.291221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.291228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ef6a60) 00:17:19.519 [2024-07-15 22:12:06.291243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.519 [2024-07-15 22:12:06.291291] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f39cc0, cid 3, qid 0 00:17:19.519 [2024-07-15 22:12:06.291390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.519 [2024-07-15 22:12:06.291419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.519 [2024-07-15 22:12:06.291431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.519 [2024-07-15 22:12:06.291440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1f39cc0) on tqpair=0x1ef6a60 00:17:19.519 [2024-07-15 22:12:06.291458] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:17:19.519 00:17:19.519 22:12:06 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:19.519 [2024-07-15 22:12:06.341974] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:17:19.519 [2024-07-15 22:12:06.342056] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86748 ] 00:17:19.780 [2024-07-15 22:12:06.493039] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:19.780 [2024-07-15 22:12:06.493144] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:19.780 [2024-07-15 22:12:06.493154] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:19.780 [2024-07-15 22:12:06.493171] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:19.780 [2024-07-15 22:12:06.493180] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:19.780 [2024-07-15 22:12:06.493350] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:19.780 [2024-07-15 22:12:06.493414] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20c1a60 0 00:17:19.780 [2024-07-15 22:12:06.499113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:19.780 [2024-07-15 22:12:06.499144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:19.780 [2024-07-15 22:12:06.499152] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:19.780 [2024-07-15 22:12:06.499157] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:19.780 [2024-07-15 22:12:06.499216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.499225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.499230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c1a60) 00:17:19.780 [2024-07-15 22:12:06.499248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:19.780 [2024-07-15 22:12:06.499286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104840, cid 0, qid 0 00:17:19.780 [2024-07-15 22:12:06.507108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.780 [2024-07-15 22:12:06.507136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.780 [2024-07-15 22:12:06.507143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104840) on tqpair=0x20c1a60 00:17:19.780 [2024-07-15 22:12:06.507168] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:19.780 
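[editor's note] The trace above shows the discovery pass finishing (two discovery log records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both over TCP/IPv4 at 10.0.0.2:4420, followed by the discovery controller's shutdown poll completing in 7 ms), and then the harness re-running spdk_nvme_identify with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', whose admin queue has just connected (FABRIC CONNECT, CNTLID 0x0001). For reference only, a minimal, untested sketch of the same connect-and-identify flow against this target using SPDK's public host API might look like the following; the transport-ID string is taken verbatim from the log, while the program name and the (omitted) error reporting are illustrative assumptions, not part of the test.

/* Sketch only (not from the test scripts): connect to the NVMe-oF/TCP subsystem
 * seen in this log and print a few identify-controller fields. Assumes SPDK
 * headers are on the include path and the SPDK/DPDK env libraries are linked. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";  /* illustrative app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport ID string the harness passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the same controller-init state machine logged below:
	 * read vs/cap, enable, IDENTIFY, AER setup, keep-alive, namespace scan. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", (const char *)cdata->sn);
	printf("Model Number:  %.40s\n", (const char *)cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

The spdk_nvme_identify tool used by host/identify.sh additionally walks namespaces and log pages, which is where the IDENTIFY, SET/GET FEATURES and GET LOG PAGE admin commands in the remainder of the trace come from.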
[2024-07-15 22:12:06.507179] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:19.780 [2024-07-15 22:12:06.507187] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:19.780 [2024-07-15 22:12:06.507209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c1a60) 00:17:19.780 [2024-07-15 22:12:06.507233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.780 [2024-07-15 22:12:06.507269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104840, cid 0, qid 0 00:17:19.780 [2024-07-15 22:12:06.507337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.780 [2024-07-15 22:12:06.507347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.780 [2024-07-15 22:12:06.507352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104840) on tqpair=0x20c1a60 00:17:19.780 [2024-07-15 22:12:06.507365] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:19.780 [2024-07-15 22:12:06.507376] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:19.780 [2024-07-15 22:12:06.507386] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c1a60) 00:17:19.780 [2024-07-15 22:12:06.507406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.780 [2024-07-15 22:12:06.507431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104840, cid 0, qid 0 00:17:19.780 [2024-07-15 22:12:06.507489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.780 [2024-07-15 22:12:06.507498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.780 [2024-07-15 22:12:06.507503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507508] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104840) on tqpair=0x20c1a60 00:17:19.780 [2024-07-15 22:12:06.507516] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:19.780 [2024-07-15 22:12:06.507528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:19.780 [2024-07-15 22:12:06.507537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x20c1a60) 00:17:19.780 [2024-07-15 22:12:06.507557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.780 [2024-07-15 22:12:06.507580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104840, cid 0, qid 0 00:17:19.780 [2024-07-15 22:12:06.507635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.780 [2024-07-15 22:12:06.507643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.780 [2024-07-15 22:12:06.507648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104840) on tqpair=0x20c1a60 00:17:19.780 [2024-07-15 22:12:06.507661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:19.780 [2024-07-15 22:12:06.507674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c1a60) 00:17:19.780 [2024-07-15 22:12:06.507695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.780 [2024-07-15 22:12:06.507717] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104840, cid 0, qid 0 00:17:19.780 [2024-07-15 22:12:06.507772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.780 [2024-07-15 22:12:06.507780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.780 [2024-07-15 22:12:06.507785] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.780 [2024-07-15 22:12:06.507791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104840) on tqpair=0x20c1a60 00:17:19.780 [2024-07-15 22:12:06.507798] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:19.780 [2024-07-15 22:12:06.507805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:19.780 [2024-07-15 22:12:06.507815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:19.780 [2024-07-15 22:12:06.507923] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:19.781 [2024-07-15 22:12:06.507929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:19.781 [2024-07-15 22:12:06.507941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.507946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.507951] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.507961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.781 [2024-07-15 22:12:06.507985] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104840, cid 0, qid 0 00:17:19.781 [2024-07-15 22:12:06.508041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.781 [2024-07-15 22:12:06.508050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.781 [2024-07-15 22:12:06.508055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104840) on tqpair=0x20c1a60 00:17:19.781 [2024-07-15 22:12:06.508079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:19.781 [2024-07-15 22:12:06.508109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.508131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.781 [2024-07-15 22:12:06.508157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104840, cid 0, qid 0 00:17:19.781 [2024-07-15 22:12:06.508227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.781 [2024-07-15 22:12:06.508238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.781 [2024-07-15 22:12:06.508243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104840) on tqpair=0x20c1a60 00:17:19.781 [2024-07-15 22:12:06.508255] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:19.781 [2024-07-15 22:12:06.508262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.508273] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:19.781 [2024-07-15 22:12:06.508286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.508300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.508316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.781 [2024-07-15 22:12:06.508341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104840, cid 0, qid 0 00:17:19.781 [2024-07-15 22:12:06.508440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.781 [2024-07-15 22:12:06.508460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.781 [2024-07-15 22:12:06.508466] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508472] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0x20c1a60): datao=0, datal=4096, cccid=0 00:17:19.781 [2024-07-15 22:12:06.508479] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2104840) on tqpair(0x20c1a60): expected_datao=0, payload_size=4096 00:17:19.781 [2024-07-15 22:12:06.508485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508496] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508502] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.781 [2024-07-15 22:12:06.508521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.781 [2024-07-15 22:12:06.508526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104840) on tqpair=0x20c1a60 00:17:19.781 [2024-07-15 22:12:06.508543] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:19.781 [2024-07-15 22:12:06.508551] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:19.781 [2024-07-15 22:12:06.508557] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:19.781 [2024-07-15 22:12:06.508563] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:19.781 [2024-07-15 22:12:06.508569] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:19.781 [2024-07-15 22:12:06.508576] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.508589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.508599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.508620] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.781 [2024-07-15 22:12:06.508647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104840, cid 0, qid 0 00:17:19.781 [2024-07-15 22:12:06.508711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.781 [2024-07-15 22:12:06.508719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.781 [2024-07-15 22:12:06.508724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508729] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104840) on tqpair=0x20c1a60 00:17:19.781 [2024-07-15 22:12:06.508739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.508759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.781 [2024-07-15 22:12:06.508767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508773] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.508785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.781 [2024-07-15 22:12:06.508794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.508811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.781 [2024-07-15 22:12:06.508819] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.508837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.781 [2024-07-15 22:12:06.508844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.508861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.508871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.508876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.508885] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.781 [2024-07-15 22:12:06.508911] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104840, cid 0, qid 0 00:17:19.781 [2024-07-15 22:12:06.508920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21049c0, cid 1, qid 0 00:17:19.781 [2024-07-15 22:12:06.508927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104b40, cid 2, qid 0 00:17:19.781 [2024-07-15 22:12:06.508933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.781 [2024-07-15 22:12:06.508939] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104e40, cid 4, qid 0 00:17:19.781 [2024-07-15 22:12:06.509031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.781 [2024-07-15 22:12:06.509050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.781 [2024-07-15 22:12:06.509056] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104e40) on tqpair=0x20c1a60 00:17:19.781 [2024-07-15 22:12:06.509069] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:19.781 [2024-07-15 22:12:06.509094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.509109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.509118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.509127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509133] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509138] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.509148] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.781 [2024-07-15 22:12:06.509176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104e40, cid 4, qid 0 00:17:19.781 [2024-07-15 22:12:06.509242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.781 [2024-07-15 22:12:06.509257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.781 [2024-07-15 22:12:06.509263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104e40) on tqpair=0x20c1a60 00:17:19.781 [2024-07-15 22:12:06.509354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.509370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.509381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.509396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.781 [2024-07-15 22:12:06.509423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104e40, cid 4, qid 0 00:17:19.781 [2024-07-15 22:12:06.509495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.781 [2024-07-15 22:12:06.509505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.781 [2024-07-15 22:12:06.509510] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509516] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c1a60): datao=0, datal=4096, cccid=4 00:17:19.781 [2024-07-15 22:12:06.509522] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2104e40) on tqpair(0x20c1a60): expected_datao=0, payload_size=4096 00:17:19.781 [2024-07-15 22:12:06.509528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509538] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509543] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.781 [2024-07-15 22:12:06.509562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.781 [2024-07-15 22:12:06.509567] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104e40) on tqpair=0x20c1a60 00:17:19.781 [2024-07-15 22:12:06.509591] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:19.781 [2024-07-15 22:12:06.509605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.509619] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.509629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.509644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.781 [2024-07-15 22:12:06.509669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104e40, cid 4, qid 0 00:17:19.781 [2024-07-15 22:12:06.509747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.781 [2024-07-15 22:12:06.509762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.781 [2024-07-15 22:12:06.509768] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509773] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c1a60): datao=0, datal=4096, cccid=4 00:17:19.781 [2024-07-15 22:12:06.509779] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2104e40) on tqpair(0x20c1a60): expected_datao=0, payload_size=4096 00:17:19.781 [2024-07-15 22:12:06.509785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509794] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509800] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.781 [2024-07-15 22:12:06.509819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.781 [2024-07-15 22:12:06.509824] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104e40) on tqpair=0x20c1a60 00:17:19.781 [2024-07-15 22:12:06.509848] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
namespace id descriptors (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.509863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.509874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.509880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.509889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.781 [2024-07-15 22:12:06.509916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104e40, cid 4, qid 0 00:17:19.781 [2024-07-15 22:12:06.509981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.781 [2024-07-15 22:12:06.509999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.781 [2024-07-15 22:12:06.510005] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.510010] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c1a60): datao=0, datal=4096, cccid=4 00:17:19.781 [2024-07-15 22:12:06.510017] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2104e40) on tqpair(0x20c1a60): expected_datao=0, payload_size=4096 00:17:19.781 [2024-07-15 22:12:06.510023] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.510032] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.510037] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.510049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.781 [2024-07-15 22:12:06.510057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.781 [2024-07-15 22:12:06.510062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.510067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104e40) on tqpair=0x20c1a60 00:17:19.781 [2024-07-15 22:12:06.510079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.510109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.510124] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.510134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.510141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.510148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.510155] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:19.781 [2024-07-15 
22:12:06.510161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:19.781 [2024-07-15 22:12:06.510169] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:19.781 [2024-07-15 22:12:06.510190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.510196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.510207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.781 [2024-07-15 22:12:06.510216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.510222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.781 [2024-07-15 22:12:06.510227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20c1a60) 00:17:19.781 [2024-07-15 22:12:06.510235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.781 [2024-07-15 22:12:06.510268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104e40, cid 4, qid 0 00:17:19.781 [2024-07-15 22:12:06.510284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104fc0, cid 5, qid 0 00:17:19.781 [2024-07-15 22:12:06.510355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.781 [2024-07-15 22:12:06.510364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.782 [2024-07-15 22:12:06.510369] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.510374] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104e40) on tqpair=0x20c1a60 00:17:19.782 [2024-07-15 22:12:06.510384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.782 [2024-07-15 22:12:06.510392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.782 [2024-07-15 22:12:06.510397] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.510402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104fc0) on tqpair=0x20c1a60 00:17:19.782 [2024-07-15 22:12:06.510415] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.510422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20c1a60) 00:17:19.782 [2024-07-15 22:12:06.510431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.782 [2024-07-15 22:12:06.510455] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104fc0, cid 5, qid 0 00:17:19.782 [2024-07-15 22:12:06.510514] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.782 [2024-07-15 22:12:06.510522] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.782 [2024-07-15 22:12:06.510527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.510532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104fc0) on tqpair=0x20c1a60 00:17:19.782 [2024-07-15 22:12:06.510546] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.510552] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20c1a60) 00:17:19.782 [2024-07-15 22:12:06.510561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.782 [2024-07-15 22:12:06.510583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104fc0, cid 5, qid 0 00:17:19.782 [2024-07-15 22:12:06.510641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.782 [2024-07-15 22:12:06.510649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.782 [2024-07-15 22:12:06.510654] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.510660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104fc0) on tqpair=0x20c1a60 00:17:19.782 [2024-07-15 22:12:06.510673] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.510679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20c1a60) 00:17:19.782 [2024-07-15 22:12:06.510688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.782 [2024-07-15 22:12:06.510709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104fc0, cid 5, qid 0 00:17:19.782 [2024-07-15 22:12:06.510763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.782 [2024-07-15 22:12:06.510772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.782 [2024-07-15 22:12:06.510777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.510782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104fc0) on tqpair=0x20c1a60 00:17:19.782 [2024-07-15 22:12:06.510805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.510818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20c1a60) 00:17:19.782 [2024-07-15 22:12:06.510828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.782 [2024-07-15 22:12:06.510839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.510844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20c1a60) 00:17:19.782 [2024-07-15 22:12:06.510853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.782 [2024-07-15 22:12:06.510862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.510868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x20c1a60) 00:17:19.782 [2024-07-15 22:12:06.510876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.782 [2024-07-15 22:12:06.510890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.510896] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x20c1a60) 00:17:19.782 [2024-07-15 22:12:06.510904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.782 [2024-07-15 22:12:06.510930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104fc0, cid 5, qid 0 00:17:19.782 [2024-07-15 22:12:06.510940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104e40, cid 4, qid 0 00:17:19.782 [2024-07-15 22:12:06.510946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2105140, cid 6, qid 0 00:17:19.782 [2024-07-15 22:12:06.510953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21052c0, cid 7, qid 0 00:17:19.782 [2024-07-15 22:12:06.515112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.782 [2024-07-15 22:12:06.515140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.782 [2024-07-15 22:12:06.515147] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515152] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c1a60): datao=0, datal=8192, cccid=5 00:17:19.782 [2024-07-15 22:12:06.515159] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2104fc0) on tqpair(0x20c1a60): expected_datao=0, payload_size=8192 00:17:19.782 [2024-07-15 22:12:06.515165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515176] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515182] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.782 [2024-07-15 22:12:06.515198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.782 [2024-07-15 22:12:06.515202] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515207] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c1a60): datao=0, datal=512, cccid=4 00:17:19.782 [2024-07-15 22:12:06.515214] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2104e40) on tqpair(0x20c1a60): expected_datao=0, payload_size=512 00:17:19.782 [2024-07-15 22:12:06.515219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515228] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515233] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.782 [2024-07-15 22:12:06.515248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.782 [2024-07-15 22:12:06.515253] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515257] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c1a60): datao=0, datal=512, cccid=6 00:17:19.782 [2024-07-15 22:12:06.515264] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2105140) on tqpair(0x20c1a60): expected_datao=0, payload_size=512 00:17:19.782 [2024-07-15 22:12:06.515269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.782 [2024-07-15 
22:12:06.515278] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515283] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:19.782 [2024-07-15 22:12:06.515298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:19.782 [2024-07-15 22:12:06.515302] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515307] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20c1a60): datao=0, datal=4096, cccid=7 00:17:19.782 [2024-07-15 22:12:06.515313] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21052c0) on tqpair(0x20c1a60): expected_datao=0, payload_size=4096 00:17:19.782 [2024-07-15 22:12:06.515319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515328] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515333] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.782 [2024-07-15 22:12:06.515348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.782 [2024-07-15 22:12:06.515353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104fc0) on tqpair=0x20c1a60 00:17:19.782 [2024-07-15 22:12:06.515385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.782 [2024-07-15 22:12:06.515394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.782 [2024-07-15 22:12:06.515399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104e40) on tqpair=0x20c1a60 00:17:19.782 [2024-07-15 22:12:06.515420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.782 [2024-07-15 22:12:06.515428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.782 [2024-07-15 22:12:06.515433] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2105140) on tqpair=0x20c1a60 00:17:19.782 [2024-07-15 22:12:06.515448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.782 [2024-07-15 22:12:06.515456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.782 [2024-07-15 22:12:06.515460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.782 [2024-07-15 22:12:06.515466] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21052c0) on tqpair=0x20c1a60 00:17:19.782 ===================================================== 00:17:19.782 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:19.782 ===================================================== 00:17:19.782 Controller Capabilities/Features 00:17:19.782 ================================ 00:17:19.782 Vendor ID: 8086 00:17:19.782 Subsystem Vendor ID: 8086 00:17:19.782 Serial Number: SPDK00000000000001 00:17:19.782 Model Number: SPDK bdev Controller 00:17:19.782 Firmware Version: 24.09 00:17:19.782 Recommended Arb Burst: 6 00:17:19.782 
IEEE OUI Identifier: e4 d2 5c 00:17:19.782 Multi-path I/O 00:17:19.782 May have multiple subsystem ports: Yes 00:17:19.782 May have multiple controllers: Yes 00:17:19.782 Associated with SR-IOV VF: No 00:17:19.782 Max Data Transfer Size: 131072 00:17:19.782 Max Number of Namespaces: 32 00:17:19.782 Max Number of I/O Queues: 127 00:17:19.782 NVMe Specification Version (VS): 1.3 00:17:19.782 NVMe Specification Version (Identify): 1.3 00:17:19.782 Maximum Queue Entries: 128 00:17:19.782 Contiguous Queues Required: Yes 00:17:19.782 Arbitration Mechanisms Supported 00:17:19.782 Weighted Round Robin: Not Supported 00:17:19.782 Vendor Specific: Not Supported 00:17:19.782 Reset Timeout: 15000 ms 00:17:19.782 Doorbell Stride: 4 bytes 00:17:19.782 NVM Subsystem Reset: Not Supported 00:17:19.782 Command Sets Supported 00:17:19.782 NVM Command Set: Supported 00:17:19.782 Boot Partition: Not Supported 00:17:19.782 Memory Page Size Minimum: 4096 bytes 00:17:19.782 Memory Page Size Maximum: 4096 bytes 00:17:19.782 Persistent Memory Region: Not Supported 00:17:19.782 Optional Asynchronous Events Supported 00:17:19.782 Namespace Attribute Notices: Supported 00:17:19.782 Firmware Activation Notices: Not Supported 00:17:19.782 ANA Change Notices: Not Supported 00:17:19.782 PLE Aggregate Log Change Notices: Not Supported 00:17:19.782 LBA Status Info Alert Notices: Not Supported 00:17:19.782 EGE Aggregate Log Change Notices: Not Supported 00:17:19.782 Normal NVM Subsystem Shutdown event: Not Supported 00:17:19.782 Zone Descriptor Change Notices: Not Supported 00:17:19.782 Discovery Log Change Notices: Not Supported 00:17:19.782 Controller Attributes 00:17:19.782 128-bit Host Identifier: Supported 00:17:19.782 Non-Operational Permissive Mode: Not Supported 00:17:19.782 NVM Sets: Not Supported 00:17:19.782 Read Recovery Levels: Not Supported 00:17:19.782 Endurance Groups: Not Supported 00:17:19.782 Predictable Latency Mode: Not Supported 00:17:19.782 Traffic Based Keep ALive: Not Supported 00:17:19.782 Namespace Granularity: Not Supported 00:17:19.782 SQ Associations: Not Supported 00:17:19.782 UUID List: Not Supported 00:17:19.782 Multi-Domain Subsystem: Not Supported 00:17:19.782 Fixed Capacity Management: Not Supported 00:17:19.782 Variable Capacity Management: Not Supported 00:17:19.782 Delete Endurance Group: Not Supported 00:17:19.782 Delete NVM Set: Not Supported 00:17:19.782 Extended LBA Formats Supported: Not Supported 00:17:19.782 Flexible Data Placement Supported: Not Supported 00:17:19.782 00:17:19.782 Controller Memory Buffer Support 00:17:19.782 ================================ 00:17:19.782 Supported: No 00:17:19.782 00:17:19.782 Persistent Memory Region Support 00:17:19.782 ================================ 00:17:19.782 Supported: No 00:17:19.782 00:17:19.782 Admin Command Set Attributes 00:17:19.782 ============================ 00:17:19.782 Security Send/Receive: Not Supported 00:17:19.782 Format NVM: Not Supported 00:17:19.782 Firmware Activate/Download: Not Supported 00:17:19.782 Namespace Management: Not Supported 00:17:19.782 Device Self-Test: Not Supported 00:17:19.782 Directives: Not Supported 00:17:19.782 NVMe-MI: Not Supported 00:17:19.782 Virtualization Management: Not Supported 00:17:19.782 Doorbell Buffer Config: Not Supported 00:17:19.782 Get LBA Status Capability: Not Supported 00:17:19.782 Command & Feature Lockdown Capability: Not Supported 00:17:19.782 Abort Command Limit: 4 00:17:19.782 Async Event Request Limit: 4 00:17:19.782 Number of Firmware Slots: N/A 00:17:19.782 Firmware 
Slot 1 Read-Only: N/A 00:17:19.782 Firmware Activation Without Reset: N/A 00:17:19.782 Multiple Update Detection Support: N/A 00:17:19.782 Firmware Update Granularity: No Information Provided 00:17:19.782 Per-Namespace SMART Log: No 00:17:19.782 Asymmetric Namespace Access Log Page: Not Supported 00:17:19.782 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:19.782 Command Effects Log Page: Supported 00:17:19.782 Get Log Page Extended Data: Supported 00:17:19.782 Telemetry Log Pages: Not Supported 00:17:19.782 Persistent Event Log Pages: Not Supported 00:17:19.782 Supported Log Pages Log Page: May Support 00:17:19.782 Commands Supported & Effects Log Page: Not Supported 00:17:19.782 Feature Identifiers & Effects Log Page:May Support 00:17:19.782 NVMe-MI Commands & Effects Log Page: May Support 00:17:19.782 Data Area 4 for Telemetry Log: Not Supported 00:17:19.782 Error Log Page Entries Supported: 128 00:17:19.782 Keep Alive: Supported 00:17:19.782 Keep Alive Granularity: 10000 ms 00:17:19.782 00:17:19.782 NVM Command Set Attributes 00:17:19.782 ========================== 00:17:19.782 Submission Queue Entry Size 00:17:19.782 Max: 64 00:17:19.782 Min: 64 00:17:19.782 Completion Queue Entry Size 00:17:19.782 Max: 16 00:17:19.782 Min: 16 00:17:19.782 Number of Namespaces: 32 00:17:19.782 Compare Command: Supported 00:17:19.782 Write Uncorrectable Command: Not Supported 00:17:19.782 Dataset Management Command: Supported 00:17:19.782 Write Zeroes Command: Supported 00:17:19.782 Set Features Save Field: Not Supported 00:17:19.782 Reservations: Supported 00:17:19.782 Timestamp: Not Supported 00:17:19.782 Copy: Supported 00:17:19.782 Volatile Write Cache: Present 00:17:19.782 Atomic Write Unit (Normal): 1 00:17:19.782 Atomic Write Unit (PFail): 1 00:17:19.782 Atomic Compare & Write Unit: 1 00:17:19.782 Fused Compare & Write: Supported 00:17:19.782 Scatter-Gather List 00:17:19.782 SGL Command Set: Supported 00:17:19.782 SGL Keyed: Supported 00:17:19.782 SGL Bit Bucket Descriptor: Not Supported 00:17:19.782 SGL Metadata Pointer: Not Supported 00:17:19.782 Oversized SGL: Not Supported 00:17:19.782 SGL Metadata Address: Not Supported 00:17:19.782 SGL Offset: Supported 00:17:19.782 Transport SGL Data Block: Not Supported 00:17:19.782 Replay Protected Memory Block: Not Supported 00:17:19.782 00:17:19.782 Firmware Slot Information 00:17:19.782 ========================= 00:17:19.782 Active slot: 1 00:17:19.782 Slot 1 Firmware Revision: 24.09 00:17:19.782 00:17:19.782 00:17:19.782 Commands Supported and Effects 00:17:19.782 ============================== 00:17:19.782 Admin Commands 00:17:19.782 -------------- 00:17:19.782 Get Log Page (02h): Supported 00:17:19.782 Identify (06h): Supported 00:17:19.782 Abort (08h): Supported 00:17:19.783 Set Features (09h): Supported 00:17:19.783 Get Features (0Ah): Supported 00:17:19.783 Asynchronous Event Request (0Ch): Supported 00:17:19.783 Keep Alive (18h): Supported 00:17:19.783 I/O Commands 00:17:19.783 ------------ 00:17:19.783 Flush (00h): Supported LBA-Change 00:17:19.783 Write (01h): Supported LBA-Change 00:17:19.783 Read (02h): Supported 00:17:19.783 Compare (05h): Supported 00:17:19.783 Write Zeroes (08h): Supported LBA-Change 00:17:19.783 Dataset Management (09h): Supported LBA-Change 00:17:19.783 Copy (19h): Supported LBA-Change 00:17:19.783 00:17:19.783 Error Log 00:17:19.783 ========= 00:17:19.783 00:17:19.783 Arbitration 00:17:19.783 =========== 00:17:19.783 Arbitration Burst: 1 00:17:19.783 00:17:19.783 Power Management 00:17:19.783 ================ 
00:17:19.783 Number of Power States: 1 00:17:19.783 Current Power State: Power State #0 00:17:19.783 Power State #0: 00:17:19.783 Max Power: 0.00 W 00:17:19.783 Non-Operational State: Operational 00:17:19.783 Entry Latency: Not Reported 00:17:19.783 Exit Latency: Not Reported 00:17:19.783 Relative Read Throughput: 0 00:17:19.783 Relative Read Latency: 0 00:17:19.783 Relative Write Throughput: 0 00:17:19.783 Relative Write Latency: 0 00:17:19.783 Idle Power: Not Reported 00:17:19.783 Active Power: Not Reported 00:17:19.783 Non-Operational Permissive Mode: Not Supported 00:17:19.783 00:17:19.783 Health Information 00:17:19.783 ================== 00:17:19.783 Critical Warnings: 00:17:19.783 Available Spare Space: OK 00:17:19.783 Temperature: OK 00:17:19.783 Device Reliability: OK 00:17:19.783 Read Only: No 00:17:19.783 Volatile Memory Backup: OK 00:17:19.783 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:19.783 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:19.783 Available Spare: 0% 00:17:19.783 Available Spare Threshold: 0% 00:17:19.783 Life Percentage Used:[2024-07-15 22:12:06.515615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.515625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.515637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.515675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21052c0, cid 7, qid 0 00:17:19.783 [2024-07-15 22:12:06.515750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.515769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.515775] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.515781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21052c0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.515831] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:19.783 [2024-07-15 22:12:06.515846] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104840) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.515855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.783 [2024-07-15 22:12:06.515862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21049c0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.515869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.783 [2024-07-15 22:12:06.515875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104b40) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.515882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.783 [2024-07-15 22:12:06.515888] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.515895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.783 [2024-07-15 22:12:06.515907] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.515914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.515919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.515930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.515961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.516017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.516025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.516030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.516047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516052] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.516081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.516140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.516220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.516230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.516235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.516248] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:19.783 [2024-07-15 22:12:06.516254] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:19.783 [2024-07-15 22:12:06.516268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.516289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.516314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.516369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.516377] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.516382] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.516402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516413] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.516422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.516444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.516501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.516509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.516514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.516533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.516553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.516575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.516631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.516640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.516645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.516663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.516684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.516705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.516759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.516768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.516773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.516791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516798] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516803] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.516812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.516833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.516890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.516898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.516903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.516922] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.516933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.516942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.516964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.517020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.517028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.517033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517039] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.517052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517063] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.517072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.517110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.517165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.517174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.517179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.517198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517205] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517210] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 
[2024-07-15 22:12:06.517219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.517242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.517297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.517305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.517310] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517315] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.517329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.517349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.517371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.517424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.517432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.517437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.517456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.517476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.517498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.517558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.517567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.517572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.783 [2024-07-15 22:12:06.517590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.783 [2024-07-15 22:12:06.517602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.783 [2024-07-15 22:12:06.517611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.783 [2024-07-15 22:12:06.517632] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.783 [2024-07-15 22:12:06.517685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.783 [2024-07-15 22:12:06.517694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.783 [2024-07-15 22:12:06.517699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.517704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.784 [2024-07-15 22:12:06.517717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.517724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.517729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.784 [2024-07-15 22:12:06.517738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.784 [2024-07-15 22:12:06.517759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.784 [2024-07-15 22:12:06.517813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.784 [2024-07-15 22:12:06.517821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.784 [2024-07-15 22:12:06.517826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.517831] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.784 [2024-07-15 22:12:06.517845] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.517851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.517856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.784 [2024-07-15 22:12:06.517865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.784 [2024-07-15 22:12:06.517886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.784 [2024-07-15 22:12:06.517940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.784 [2024-07-15 22:12:06.517949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.784 [2024-07-15 22:12:06.517954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.517959] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.784 [2024-07-15 22:12:06.517972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.517979] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.517984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.784 [2024-07-15 22:12:06.517993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.784 [2024-07-15 22:12:06.518014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.784 [2024-07-15 22:12:06.518068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.784 
[2024-07-15 22:12:06.518077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.784 [2024-07-15 22:12:06.518094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.784 [2024-07-15 22:12:06.518115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.784 [2024-07-15 22:12:06.518136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.784 [2024-07-15 22:12:06.518161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.784 [2024-07-15 22:12:06.518215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.784 [2024-07-15 22:12:06.518224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.784 [2024-07-15 22:12:06.518229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.784 [2024-07-15 22:12:06.518248] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518255] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.784 [2024-07-15 22:12:06.518269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.784 [2024-07-15 22:12:06.518291] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.784 [2024-07-15 22:12:06.518351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.784 [2024-07-15 22:12:06.518364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.784 [2024-07-15 22:12:06.518369] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.784 [2024-07-15 22:12:06.518388] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.784 [2024-07-15 22:12:06.518409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.784 [2024-07-15 22:12:06.518433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.784 [2024-07-15 22:12:06.518490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.784 [2024-07-15 22:12:06.518507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.784 [2024-07-15 22:12:06.518513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:17:19.784 [2024-07-15 22:12:06.518519] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.784 [2024-07-15 22:12:06.518533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.784 [2024-07-15 22:12:06.518554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.784 [2024-07-15 22:12:06.518577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.784 [2024-07-15 22:12:06.518635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.784 [2024-07-15 22:12:06.518649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.784 [2024-07-15 22:12:06.518654] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.784 [2024-07-15 22:12:06.518673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.784 [2024-07-15 22:12:06.518694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.784 [2024-07-15 22:12:06.518718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.784 [2024-07-15 22:12:06.518774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.784 [2024-07-15 22:12:06.518788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.784 [2024-07-15 22:12:06.518793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.784 [2024-07-15 22:12:06.518813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.784 [2024-07-15 22:12:06.518834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.784 [2024-07-15 22:12:06.518856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.784 [2024-07-15 22:12:06.518913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.784 [2024-07-15 22:12:06.518927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.784 [2024-07-15 22:12:06.518932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.784 [2024-07-15 22:12:06.518951] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.518963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.784 [2024-07-15 22:12:06.518972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.784 [2024-07-15 22:12:06.518995] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.784 [2024-07-15 22:12:06.519048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.784 [2024-07-15 22:12:06.519062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.784 [2024-07-15 22:12:06.519067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.519073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.784 [2024-07-15 22:12:06.523105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.523134] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.523141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20c1a60) 00:17:19.784 [2024-07-15 22:12:06.523152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:19.784 [2024-07-15 22:12:06.523186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2104cc0, cid 3, qid 0 00:17:19.784 [2024-07-15 22:12:06.523246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:19.784 [2024-07-15 22:12:06.523256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:19.784 [2024-07-15 22:12:06.523261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:19.784 [2024-07-15 22:12:06.523266] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2104cc0) on tqpair=0x20c1a60 00:17:19.784 [2024-07-15 22:12:06.523278] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:17:19.784 0% 00:17:19.784 Data Units Read: 0 00:17:19.784 Data Units Written: 0 00:17:19.784 Host Read Commands: 0 00:17:19.784 Host Write Commands: 0 00:17:19.784 Controller Busy Time: 0 minutes 00:17:19.784 Power Cycles: 0 00:17:19.784 Power On Hours: 0 hours 00:17:19.784 Unsafe Shutdowns: 0 00:17:19.784 Unrecoverable Media Errors: 0 00:17:19.784 Lifetime Error Log Entries: 0 00:17:19.784 Warning Temperature Time: 0 minutes 00:17:19.784 Critical Temperature Time: 0 minutes 00:17:19.784 00:17:19.784 Number of Queues 00:17:19.784 ================ 00:17:19.784 Number of I/O Submission Queues: 127 00:17:19.784 Number of I/O Completion Queues: 127 00:17:19.784 00:17:19.784 Active Namespaces 00:17:19.784 ================= 00:17:19.784 Namespace ID:1 00:17:19.784 Error Recovery Timeout: Unlimited 00:17:19.784 Command Set Identifier: NVM (00h) 00:17:19.784 Deallocate: Supported 00:17:19.784 Deallocated/Unwritten Error: Not Supported 00:17:19.784 Deallocated Read Value: Unknown 00:17:19.784 Deallocate in Write Zeroes: Not Supported 00:17:19.784 Deallocated Guard Field: 0xFFFF 00:17:19.784 Flush: Supported 00:17:19.784 Reservation: Supported 00:17:19.784 Namespace Sharing Capabilities: Multiple 
Controllers 00:17:19.784 Size (in LBAs): 131072 (0GiB) 00:17:19.784 Capacity (in LBAs): 131072 (0GiB) 00:17:19.784 Utilization (in LBAs): 131072 (0GiB) 00:17:19.784 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:19.784 EUI64: ABCDEF0123456789 00:17:19.784 UUID: 3f732880-1c77-43a0-97ef-0dbb6d4c4bf9 00:17:19.784 Thin Provisioning: Not Supported 00:17:19.784 Per-NS Atomic Units: Yes 00:17:19.784 Atomic Boundary Size (Normal): 0 00:17:19.784 Atomic Boundary Size (PFail): 0 00:17:19.784 Atomic Boundary Offset: 0 00:17:19.784 Maximum Single Source Range Length: 65535 00:17:19.784 Maximum Copy Length: 65535 00:17:19.784 Maximum Source Range Count: 1 00:17:19.784 NGUID/EUI64 Never Reused: No 00:17:19.784 Namespace Write Protected: No 00:17:19.784 Number of LBA Formats: 1 00:17:19.784 Current LBA Format: LBA Format #00 00:17:19.784 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:19.784 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.784 rmmod nvme_tcp 00:17:19.784 rmmod nvme_fabrics 00:17:19.784 rmmod nvme_keyring 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86705 ']' 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86705 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86705 ']' 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86705 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86705 00:17:19.784 killing process with pid 86705 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86705' 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@967 -- # kill 86705 00:17:19.784 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86705 00:17:20.042 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.042 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.042 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.042 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.042 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.042 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.042 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.042 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.042 22:12:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:20.042 00:17:20.042 real 0m1.840s 00:17:20.042 user 0m4.273s 00:17:20.042 sys 0m0.566s 00:17:20.042 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:20.042 22:12:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:20.042 ************************************ 00:17:20.042 END TEST nvmf_identify 00:17:20.042 ************************************ 00:17:20.042 22:12:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:20.042 22:12:06 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:20.042 22:12:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:20.042 22:12:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:20.042 22:12:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:20.042 ************************************ 00:17:20.042 START TEST nvmf_perf 00:17:20.042 ************************************ 00:17:20.042 22:12:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:20.300 * Looking for test storage... 
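For reference, the nvmftestfini/killprocess teardown traced just above reduces to a handful of commands. This is a minimal hand-run sketch based only on what the trace shows; the repo path and PID are the ones from this run, and calling scripts/rpc.py directly is an assumption standing in for the harness's rpc_cmd wrapper.

    # Hand-run equivalent of the nvmf_identify teardown traced above (paths/PID from this run).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp          # unload host-side NVMe/TCP support (also drops nvme_fabrics/nvme_keyring deps)
    modprobe -v -r nvme-fabrics
    kill 86705 && wait 86705         # stop the nvmf_tgt application started for this test
    ip -4 addr flush nvmf_init_if    # clear the initiator-side test interface
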
00:17:20.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:20.300 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:20.301 Cannot find device "nvmf_tgt_br" 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.301 Cannot find device "nvmf_tgt_br2" 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:20.301 Cannot find device "nvmf_tgt_br" 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:20.301 Cannot find device "nvmf_tgt_br2" 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.301 
22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:20.301 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:20.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:17:20.559 00:17:20.559 --- 10.0.0.2 ping statistics --- 00:17:20.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.559 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:20.559 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.559 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:17:20.559 00:17:20.559 --- 10.0.0.3 ping statistics --- 00:17:20.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.559 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:20.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:20.559 00:17:20.559 --- 10.0.0.1 ping statistics --- 00:17:20.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.559 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86913 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86913 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 86913 ']' 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.559 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:20.559 [2024-07-15 22:12:07.437298] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:17:20.559 [2024-07-15 22:12:07.437434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.819 [2024-07-15 22:12:07.570517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.819 [2024-07-15 22:12:07.633920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.819 [2024-07-15 22:12:07.633980] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
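Once waitforlisten confirms the target (launched inside the nvmf_tgt_ns_spdk namespace as nvmf_tgt -i 0 -e 0xFFFF -m 0xF) is listening on /var/tmp/spdk.sock, perf.sh provisions it over that socket, as the trace below shows. Stripped of the xtrace wrapping, the sequence amounts to roughly the following; this is a condensed sketch pulled from the traced commands, not the script itself, and the pipe between gen_nvme.sh and load_subsystem_config is inferred from the two adjacent perf.sh@28 entries:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # attach the local NVMe controller and create a 64 MB malloc bdev with 512-byte blocks
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | $rpc_py load_subsystem_config
  $rpc_py bdev_malloc_create 64 512
  # bring up the TCP transport with the options this run uses
  $rpc_py nvmf_create_transport -t tcp -o
  # one subsystem with two namespaces (Malloc0 and the local Nvme0n1),
  # plus data and discovery listeners on the namespaced interface 10.0.0.2:4420
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420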
00:17:20.819 [2024-07-15 22:12:07.633992] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.819 [2024-07-15 22:12:07.634001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.819 [2024-07-15 22:12:07.634008] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.819 [2024-07-15 22:12:07.634113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.819 [2024-07-15 22:12:07.634676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.819 [2024-07-15 22:12:07.634748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.819 [2024-07-15 22:12:07.634755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.819 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.819 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:17:20.819 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.819 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:20.819 22:12:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:20.819 22:12:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.819 22:12:07 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:20.819 22:12:07 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:21.386 22:12:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:21.386 22:12:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:21.645 22:12:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:21.645 22:12:08 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:21.904 22:12:08 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:21.904 22:12:08 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:21.904 22:12:08 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:21.904 22:12:08 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:21.904 22:12:08 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:22.162 [2024-07-15 22:12:08.988891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.162 22:12:09 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:22.438 22:12:09 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:22.438 22:12:09 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:22.696 22:12:09 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:22.696 22:12:09 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:22.954 22:12:09 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.212 [2024-07-15 22:12:09.978131] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.212 22:12:09 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:23.471 22:12:10 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:23.471 22:12:10 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:23.471 22:12:10 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:23.471 22:12:10 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:24.472 Initializing NVMe Controllers 00:17:24.472 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:24.472 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:24.472 Initialization complete. Launching workers. 00:17:24.472 ======================================================== 00:17:24.472 Latency(us) 00:17:24.472 Device Information : IOPS MiB/s Average min max 00:17:24.472 PCIE (0000:00:10.0) NSID 1 from core 0: 25312.00 98.88 1264.08 302.13 6841.24 00:17:24.472 ======================================================== 00:17:24.472 Total : 25312.00 98.88 1264.08 302.13 6841.24 00:17:24.472 00:17:24.731 22:12:11 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:26.106 Initializing NVMe Controllers 00:17:26.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:26.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:26.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:26.106 Initialization complete. Launching workers. 00:17:26.106 ======================================================== 00:17:26.106 Latency(us) 00:17:26.106 Device Information : IOPS MiB/s Average min max 00:17:26.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3327.90 13.00 298.89 117.57 4366.64 00:17:26.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8194.56 5996.44 12027.91 00:17:26.106 ======================================================== 00:17:26.106 Total : 3450.89 13.48 580.30 117.57 12027.91 00:17:26.106 00:17:26.106 22:12:12 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:27.481 Initializing NVMe Controllers 00:17:27.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:27.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:27.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:27.481 Initialization complete. Launching workers. 
00:17:27.481 ======================================================== 00:17:27.481 Latency(us) 00:17:27.481 Device Information : IOPS MiB/s Average min max 00:17:27.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8392.60 32.78 3813.22 872.49 9876.59 00:17:27.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2687.90 10.50 12028.03 6715.92 23363.94 00:17:27.481 ======================================================== 00:17:27.481 Total : 11080.50 43.28 5805.97 872.49 23363.94 00:17:27.481 00:17:27.481 22:12:14 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:27.481 22:12:14 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:30.013 Initializing NVMe Controllers 00:17:30.013 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:30.013 Controller IO queue size 128, less than required. 00:17:30.013 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:30.013 Controller IO queue size 128, less than required. 00:17:30.013 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:30.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:30.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:30.013 Initialization complete. Launching workers. 00:17:30.013 ======================================================== 00:17:30.013 Latency(us) 00:17:30.013 Device Information : IOPS MiB/s Average min max 00:17:30.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1679.19 419.80 77184.23 46723.07 138363.51 00:17:30.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 446.15 111.54 318323.31 99298.02 709185.41 00:17:30.013 ======================================================== 00:17:30.013 Total : 2125.34 531.33 127804.16 46723.07 709185.41 00:17:30.013 00:17:30.013 22:12:16 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:30.273 Initializing NVMe Controllers 00:17:30.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:30.273 Controller IO queue size 128, less than required. 00:17:30.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:30.273 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:30.273 Controller IO queue size 128, less than required. 00:17:30.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:30.273 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:17:30.273 WARNING: Some requested NVMe devices were skipped 00:17:30.273 No valid NVMe controllers or AIO or URING devices found 00:17:30.273 22:12:17 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:32.804 Initializing NVMe Controllers 00:17:32.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:32.804 Controller IO queue size 128, less than required. 00:17:32.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:32.804 Controller IO queue size 128, less than required. 00:17:32.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:32.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:32.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:32.804 Initialization complete. Launching workers. 00:17:32.804 00:17:32.804 ==================== 00:17:32.804 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:32.804 TCP transport: 00:17:32.804 polls: 9454 00:17:32.804 idle_polls: 5258 00:17:32.804 sock_completions: 4196 00:17:32.804 nvme_completions: 4363 00:17:32.804 submitted_requests: 6458 00:17:32.804 queued_requests: 1 00:17:32.804 00:17:32.804 ==================== 00:17:32.804 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:32.804 TCP transport: 00:17:32.804 polls: 10841 00:17:32.804 idle_polls: 7195 00:17:32.804 sock_completions: 3646 00:17:32.804 nvme_completions: 6843 00:17:32.804 submitted_requests: 10312 00:17:32.804 queued_requests: 1 00:17:32.804 ======================================================== 00:17:32.804 Latency(us) 00:17:32.804 Device Information : IOPS MiB/s Average min max 00:17:32.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1090.23 272.56 120327.85 73430.05 192292.96 00:17:32.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1710.07 427.52 75770.68 29191.02 130453.80 00:17:32.804 ======================================================== 00:17:32.804 Total : 2800.30 700.07 93117.91 29191.02 192292.96 00:17:32.804 00:17:32.804 22:12:19 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:32.804 22:12:19 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:33.062 rmmod nvme_tcp 00:17:33.062 rmmod nvme_fabrics 00:17:33.062 rmmod nvme_keyring 00:17:33.062 22:12:19 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86913 ']' 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86913 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 86913 ']' 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 86913 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86913 00:17:33.062 killing process with pid 86913 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86913' 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 86913 00:17:33.062 22:12:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 86913 00:17:33.997 22:12:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:33.997 22:12:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:33.997 22:12:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:33.998 22:12:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.998 22:12:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.998 22:12:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.998 22:12:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.998 22:12:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.998 22:12:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:33.998 ************************************ 00:17:33.998 END TEST nvmf_perf 00:17:33.998 ************************************ 00:17:33.998 00:17:33.998 real 0m13.676s 00:17:33.998 user 0m49.985s 00:17:33.998 sys 0m3.499s 00:17:33.998 22:12:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:33.998 22:12:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:33.998 22:12:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:33.998 22:12:20 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:33.998 22:12:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:33.998 22:12:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.998 22:12:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:33.998 ************************************ 00:17:33.998 START TEST nvmf_fio_host 00:17:33.998 ************************************ 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:33.998 * Looking for test storage... 
00:17:33.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
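The nvmf_veth_init traced below rebuilds the same test network the perf run above used: the initiator side stays in the root namespace at 10.0.0.1, the target gets two veth interfaces at 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, and the host-side peers are tied together by the nvmf_br bridge with port 4420 opened in iptables. Condensed from the trace (the error-tolerant teardown of stale devices is omitted), the setup is:

  ip netns add nvmf_tgt_ns_spdk
  # three veth pairs: one initiator-side, two target-side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace and assign addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together and allow NVMe/TCP traffic
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT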
00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:33.998 Cannot find device "nvmf_tgt_br" 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:33.998 Cannot find device "nvmf_tgt_br2" 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:17:33.998 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:33.999 Cannot find device "nvmf_tgt_br" 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:33.999 Cannot find device "nvmf_tgt_br2" 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:33.999 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:33.999 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:33.999 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:34.257 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:34.257 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:34.257 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:34.257 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:34.257 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:34.257 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:34.257 22:12:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:34.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:17:34.257 00:17:34.257 --- 10.0.0.2 ping statistics --- 00:17:34.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.257 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:34.257 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:34.257 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:17:34.257 00:17:34.257 --- 10.0.0.3 ping statistics --- 00:17:34.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.257 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:34.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:34.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:34.257 00:17:34.257 --- 10.0.0.1 ping statistics --- 00:17:34.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.257 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87371 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87371 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87371 ']' 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.257 22:12:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.257 [2024-07-15 22:12:21.201180] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:17:34.257 [2024-07-15 22:12:21.201295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.515 [2024-07-15 22:12:21.372816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.515 [2024-07-15 22:12:21.459875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
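Further on in this test, once Malloc1 is exported on nqn.2016-06.io.spdk:cnode1, fio drives the subsystem through the SPDK fio plugin rather than the kernel initiator: fio is preloaded with the spdk_nvme ioengine and handed a transport-ID string in place of a block device. Stripped of the sanitizer-library probing visible in the trace (which resolves to an empty preload here), the invocation is essentially:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096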
00:17:34.515 [2024-07-15 22:12:21.459937] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.515 [2024-07-15 22:12:21.459953] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.515 [2024-07-15 22:12:21.459966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.515 [2024-07-15 22:12:21.459977] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.515 [2024-07-15 22:12:21.460070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.515 [2024-07-15 22:12:21.460362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.515 [2024-07-15 22:12:21.460735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.515 [2024-07-15 22:12:21.460750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.536 22:12:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.537 22:12:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:17:35.537 22:12:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:35.537 [2024-07-15 22:12:22.426692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.537 22:12:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:35.537 22:12:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.537 22:12:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.795 22:12:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:36.054 Malloc1 00:17:36.054 22:12:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.313 22:12:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:36.570 22:12:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.828 [2024-07-15 22:12:23.655610] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.828 22:12:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:37.087 22:12:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:37.345 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:37.345 fio-3.35 00:17:37.345 Starting 1 thread 00:17:39.881 00:17:39.881 test: (groupid=0, jobs=1): err= 0: pid=87502: Mon Jul 15 22:12:26 2024 00:17:39.881 read: IOPS=8651, BW=33.8MiB/s (35.4MB/s)(67.8MiB/2007msec) 00:17:39.881 slat (usec): min=2, max=418, avg= 3.10, stdev= 4.19 00:17:39.881 clat (usec): min=3592, max=14155, avg=7760.77, stdev=885.80 00:17:39.881 lat (usec): min=3637, max=14158, avg=7763.87, stdev=885.68 00:17:39.881 clat percentiles (usec): 00:17:39.881 | 1.00th=[ 5932], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7111], 00:17:39.881 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7767], 00:17:39.881 | 70.00th=[ 7963], 80.00th=[ 8225], 90.00th=[ 8979], 95.00th=[ 9634], 00:17:39.881 | 99.00th=[10421], 99.50th=[10683], 99.90th=[12256], 99.95th=[12649], 00:17:39.881 | 99.99th=[14091] 00:17:39.881 bw ( KiB/s): min=33552, max=35096, per=99.93%, avg=34580.00, stdev=697.50, samples=4 00:17:39.881 iops : min= 8388, max= 8774, avg=8645.00, stdev=174.38, samples=4 00:17:39.881 write: IOPS=8640, BW=33.8MiB/s (35.4MB/s)(67.7MiB/2007msec); 0 zone resets 00:17:39.881 slat (usec): min=2, max=361, avg= 3.24, stdev= 3.10 00:17:39.881 clat (usec): min=2712, max=13323, avg=6981.49, stdev=780.82 00:17:39.881 lat (usec): 
min=2726, max=13326, avg=6984.73, stdev=780.71 00:17:39.881 clat percentiles (usec): 00:17:39.881 | 1.00th=[ 5276], 5.00th=[ 5997], 10.00th=[ 6259], 20.00th=[ 6456], 00:17:39.881 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:17:39.881 | 70.00th=[ 7111], 80.00th=[ 7373], 90.00th=[ 8029], 95.00th=[ 8586], 00:17:39.881 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[10683], 99.95th=[12387], 00:17:39.881 | 99.99th=[13173] 00:17:39.881 bw ( KiB/s): min=33664, max=35520, per=100.00%, avg=34576.00, stdev=820.22, samples=4 00:17:39.881 iops : min= 8416, max= 8880, avg=8644.00, stdev=205.06, samples=4 00:17:39.881 lat (msec) : 4=0.07%, 10=98.57%, 20=1.35% 00:17:39.881 cpu : usr=58.77%, sys=28.32%, ctx=12, majf=0, minf=7 00:17:39.881 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:39.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:39.881 issued rwts: total=17363,17342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.881 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:39.881 00:17:39.881 Run status group 0 (all jobs): 00:17:39.881 READ: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=67.8MiB (71.1MB), run=2007-2007msec 00:17:39.881 WRITE: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=67.7MiB (71.0MB), run=2007-2007msec 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:39.881 22:12:26 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:39.881 22:12:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:39.881 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:39.881 fio-3.35 00:17:39.881 Starting 1 thread 00:17:42.418 00:17:42.418 test: (groupid=0, jobs=1): err= 0: pid=87551: Mon Jul 15 22:12:28 2024 00:17:42.418 read: IOPS=7809, BW=122MiB/s (128MB/s)(245MiB/2008msec) 00:17:42.418 slat (usec): min=3, max=122, avg= 3.95, stdev= 1.91 00:17:42.418 clat (usec): min=2432, max=21852, avg=9723.50, stdev=2428.28 00:17:42.418 lat (usec): min=2435, max=21857, avg=9727.45, stdev=2428.37 00:17:42.418 clat percentiles (usec): 00:17:42.418 | 1.00th=[ 4948], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 7439], 00:17:42.418 | 30.00th=[ 8160], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[10552], 00:17:42.418 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12387], 95.00th=[13435], 00:17:42.418 | 99.00th=[16057], 99.50th=[17433], 99.90th=[20841], 99.95th=[21627], 00:17:42.418 | 99.99th=[21890] 00:17:42.418 bw ( KiB/s): min=56576, max=69792, per=50.69%, avg=63344.00, stdev=7032.65, samples=4 00:17:42.418 iops : min= 3536, max= 4362, avg=3959.00, stdev=439.54, samples=4 00:17:42.418 write: IOPS=4699, BW=73.4MiB/s (77.0MB/s)(130MiB/1769msec); 0 zone resets 00:17:42.418 slat (usec): min=37, max=313, avg=39.82, stdev= 6.37 00:17:42.418 clat (usec): min=3392, max=24081, avg=11780.74, stdev=2461.07 00:17:42.418 lat (usec): min=3431, max=24119, avg=11820.56, stdev=2461.44 00:17:42.418 clat percentiles (usec): 00:17:42.418 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:17:42.418 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11469], 60.00th=[11994], 00:17:42.418 | 70.00th=[12649], 80.00th=[13698], 90.00th=[15008], 95.00th=[16450], 00:17:42.418 | 99.00th=[19006], 99.50th=[20317], 99.90th=[22938], 99.95th=[23200], 00:17:42.418 | 99.99th=[23987] 00:17:42.418 bw ( KiB/s): min=59232, max=72608, per=87.93%, avg=66112.00, stdev=7244.40, samples=4 00:17:42.418 iops : min= 3702, max= 4538, avg=4132.00, stdev=452.78, samples=4 00:17:42.418 lat (msec) : 4=0.18%, 10=43.05%, 20=56.48%, 50=0.28% 00:17:42.418 cpu : usr=72.35%, sys=17.89%, ctx=6, majf=0, minf=22 00:17:42.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:42.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:42.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:42.418 issued rwts: total=15682,8313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:42.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:42.418 00:17:42.418 Run status group 0 (all jobs): 00:17:42.418 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=245MiB (257MB), run=2008-2008msec 00:17:42.418 WRITE: bw=73.4MiB/s (77.0MB/s), 73.4MiB/s-73.4MiB/s 
(77.0MB/s-77.0MB/s), io=130MiB (136MB), run=1769-1769msec 00:17:42.418 22:12:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:42.418 rmmod nvme_tcp 00:17:42.418 rmmod nvme_fabrics 00:17:42.418 rmmod nvme_keyring 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87371 ']' 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87371 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87371 ']' 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87371 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87371 00:17:42.418 killing process with pid 87371 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87371' 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87371 00:17:42.418 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87371 00:17:42.677 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.677 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.677 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.677 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.677 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.677 22:12:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.677 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.677 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.677 22:12:29 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:42.677 00:17:42.677 real 0m8.775s 00:17:42.677 user 0m35.945s 00:17:42.677 sys 0m2.258s 00:17:42.677 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:42.677 22:12:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.677 ************************************ 00:17:42.677 END TEST nvmf_fio_host 00:17:42.677 ************************************ 00:17:42.677 22:12:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:42.677 22:12:29 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:42.677 22:12:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:42.677 22:12:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.677 22:12:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:42.677 ************************************ 00:17:42.677 START TEST nvmf_failover 00:17:42.677 ************************************ 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:42.677 * Looking for test storage... 00:17:42.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:42.677 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:42.934 Cannot find device "nvmf_tgt_br" 00:17:42.934 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:17:42.934 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.934 Cannot find device "nvmf_tgt_br2" 00:17:42.934 22:12:29 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:17:42.934 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:42.934 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:42.934 Cannot find device "nvmf_tgt_br" 00:17:42.934 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:17:42.934 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:42.934 Cannot find device "nvmf_tgt_br2" 00:17:42.934 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:42.935 22:12:29 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:42.935 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:43.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:17:43.193 00:17:43.193 --- 10.0.0.2 ping statistics --- 00:17:43.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.193 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:43.193 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:43.193 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:43.193 00:17:43.193 --- 10.0.0.3 ping statistics --- 00:17:43.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.193 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:43.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:43.193 00:17:43.193 --- 10.0.0.1 ping statistics --- 00:17:43.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.193 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87760 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87760 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87760 ']' 
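The nvmf_veth_init trace above assembles the virtual test network the rest of the run depends on: one veth pair whose nvmf_init_if end stays on the host as the initiator interface, two veth pairs whose nvmf_tgt_if/nvmf_tgt_if2 ends are moved into the nvmf_tgt_ns_spdk namespace as target ports, and an nvmf_br bridge joining the host-side peers, plus iptables rules admitting the NVMe/TCP traffic. Condensed from the commands in the trace, the topology is built roughly like this:

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # three veth pairs: initiator, first target port, second target port
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addresses used throughout the test: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # admit NVMe/TCP on 4420 and allow bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge passes traffic before the target is started.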
00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.193 22:12:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:43.193 [2024-07-15 22:12:30.005688] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:17:43.193 [2024-07-15 22:12:30.005781] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.451 [2024-07-15 22:12:30.146296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:43.451 [2024-07-15 22:12:30.206206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.451 [2024-07-15 22:12:30.206260] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.451 [2024-07-15 22:12:30.206272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.451 [2024-07-15 22:12:30.206280] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.451 [2024-07-15 22:12:30.206287] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
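At this point nvmf_tgt is running inside the namespace and listening on the default RPC socket (/var/tmp/spdk.sock), so the failover test can provision it from the host. Condensed from the rpc.py calls that follow in the trace, the target-side setup is roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # transport options as used by the harness: TCP, '-o', and a -u 8192 I/O unit size
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # a 64 MiB malloc bdev with 512-byte blocks backs the namespace
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # one subsystem, one namespace, three TCP listeners on the same target address
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The three listeners on ports 4420, 4421, and 4422 are what give the host-side bdevperf process alternate paths to fail over between.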
00:17:43.451 [2024-07-15 22:12:30.207012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.451 [2024-07-15 22:12:30.207229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.451 [2024-07-15 22:12:30.207228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.383 22:12:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.383 22:12:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:44.383 22:12:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.383 22:12:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:44.383 22:12:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:44.383 22:12:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.383 22:12:31 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:44.383 [2024-07-15 22:12:31.273982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.383 22:12:31 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:44.955 Malloc0 00:17:44.955 22:12:31 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:45.212 22:12:31 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:45.470 22:12:32 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.728 [2024-07-15 22:12:32.474260] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.728 22:12:32 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:45.986 [2024-07-15 22:12:32.718410] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:45.986 22:12:32 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:46.245 [2024-07-15 22:12:32.958628] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:46.245 22:12:32 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87877 00:17:46.245 22:12:32 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:46.245 22:12:32 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:46.245 22:12:32 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87877 /var/tmp/bdevperf.sock 00:17:46.245 22:12:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87877 ']' 00:17:46.245 22:12:32 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.245 22:12:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.245 22:12:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.245 22:12:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.245 22:12:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:46.503 22:12:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.503 22:12:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:46.503 22:12:33 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:46.761 NVMe0n1 00:17:47.019 22:12:33 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:47.276 00:17:47.276 22:12:34 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87911 00:17:47.276 22:12:34 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:47.276 22:12:34 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:48.208 22:12:35 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.467 [2024-07-15 22:12:35.377323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0f80 is same with the state(5) to be set 00:17:48.467 [2024-07-15 22:12:35.377894] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0f80 is same with the state(5) to be set 00:17:48.467 [2024-07-15 22:12:35.377991] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0f80 is same with the state(5) to be set 00:17:48.467 [2024-07-15 22:12:35.378057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0f80 is same with the state(5) to be set 00:17:48.467 [2024-07-15 22:12:35.378149] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0f80 is same with the state(5) to be set 00:17:48.467 [2024-07-15 22:12:35.378225] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0f80 is same with the state(5) to be set 00:17:48.467 [2024-07-15 22:12:35.378288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0f80 is same with the state(5) to be set 00:17:48.467 [2024-07-15 22:12:35.378357] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0f80 is same with the state(5) to be set 00:17:48.467 [2024-07-15 22:12:35.378445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0f80 is same with the state(5) to be set 00:17:48.467 [2024-07-15 22:12:35.378521] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af0f80 is same with the state(5) to be set 
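The burst of tcp.c:1621 recv-state messages above begins right after nvmf_subsystem_remove_listener drops port 4420: bdevperf already has NVMe0 attached through both 4420 and 4421, so the messages accompany the target tearing down the 4420 connections while I/O continues on the surviving path. Pulling the rpc.py calls out of the surrounding trace, the path-flapping sequence the test drives looks like this (the I/O itself is started with bdevperf.py perform_tests against /var/tmp/bdevperf.sock, as shown above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    brpc=/var/tmp/bdevperf.sock
    # two paths to the same subsystem give bdevperf something to fail over between
    $rpc -s $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # with I/O running, drop the first path
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    # add a third path, then drop the second
    $rpc -s $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    # bring 4420 back, then retire 4422
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422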
[tcp.c:1621 recv-state message for tqpair=0x1af0f80 repeated many more times after nvmf_subsystem_remove_listener dropped port 4420; repetitions trimmed]
00:17:48.724 22:12:35 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:17:52.004 22:12:38 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:52.004
00:17:52.004 22:12:38 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:17:52.263 [2024-07-15 22:12:38.966029] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af1e10 is same with the state(5) to be set
[tcp.c:1621 recv-state message for tqpair=0x1af1e10 repeated many more times after nvmf_subsystem_remove_listener dropped port 4421; repetitions trimmed]
00:17:52.265 22:12:38 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:17:55.567 22:12:41 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:55.567 [2024-07-15 22:12:42.264219] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:55.567 22:12:42 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:17:56.511 22:12:43 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:17:56.769 [2024-07-15 22:12:43.614762] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set
[tcp.c:1621 recv-state message for tqpair=0x1af29a0 repeated many more times after nvmf_subsystem_remove_listener dropped port 4422; repetitions trimmed]
with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615071] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615092] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615102] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615110] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615118] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615127] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615134] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615160] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615177] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615185] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615217] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615225] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615233] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615241] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615257] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615265] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615281] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615289] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615297] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615305] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615313] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615326] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615334] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615342] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615350] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615358] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615375] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615383] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615398] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615406] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615414] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the 
state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615422] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615438] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615453] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615461] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615469] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615477] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615485] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615493] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615501] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615509] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615517] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615525] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615533] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615541] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615557] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615573] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615581] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615589] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615597] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615605] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615614] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615630] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615638] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615646] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615654] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615662] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615670] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615678] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615686] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615694] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615710] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.770 [2024-07-15 22:12:43.615718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615726] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615734] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615759] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 
22:12:43.615767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615775] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615807] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615823] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615831] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615839] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 [2024-07-15 22:12:43.615847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af29a0 is same with the state(5) to be set 00:17:56.771 22:12:43 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 87911 00:18:03.336 0 00:18:03.336 22:12:49 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 87877 00:18:03.336 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87877 ']' 00:18:03.336 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87877 00:18:03.336 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:03.336 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.336 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87877 00:18:03.336 killing process with pid 87877 00:18:03.336 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:03.336 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:03.336 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87877' 00:18:03.336 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87877 00:18:03.336 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87877 00:18:03.336 22:12:49 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:03.336 [2024-07-15 22:12:33.024874] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:18:03.336 [2024-07-15 22:12:33.024986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87877 ]
00:18:03.336 [2024-07-15 22:12:33.163597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:03.336 [2024-07-15 22:12:33.234005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:03.336 Running I/O for 15 seconds...
00:18:03.336 [2024-07-15 22:12:35.382287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:03.336 [2024-07-15 22:12:35.382343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:03.336-00:18:03.339 [2024-07-15 22:12:35.382376 - 22:12:35.386011] nvme_qpair.c: the same nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pair repeats for each outstanding I/O on sqid:1 (READ lba:81536-81968 and WRITE lba:81976-82480, len:8 each), every command completing with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:03.339 [2024-07-15 22:12:35.386025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.339 [2024-07-15 22:12:35.386054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.339 [2024-07-15 22:12:35.386096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.339 [2024-07-15 22:12:35.386128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.339 [2024-07-15 22:12:35.386157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.339 [2024-07-15 22:12:35.386194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.339 [2024-07-15 22:12:35.386224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.339 [2024-07-15 22:12:35.386254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7fc90 is same with the state(5) to be set 00:18:03.339 [2024-07-15 22:12:35.386289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:03.339 [2024-07-15 22:12:35.386299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:03.339 [2024-07-15 22:12:35.386310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82544 len:8 PRP1 0x0 PRP2 0x0 00:18:03.339 [2024-07-15 22:12:35.386323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386386] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b7fc90 was disconnected and freed. reset controller. 
00:18:03.339 [2024-07-15 22:12:35.386406] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:03.339 [2024-07-15 22:12:35.386484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.339 [2024-07-15 22:12:35.386506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.339 [2024-07-15 22:12:35.386535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.339 [2024-07-15 22:12:35.386562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.339 [2024-07-15 22:12:35.386593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:35.386606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:03.339 [2024-07-15 22:12:35.390625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:03.339 [2024-07-15 22:12:35.390699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b03e30 (9): Bad file descriptor 00:18:03.339 [2024-07-15 22:12:35.432129] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
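The long runs of READ/WRITE commands above that complete with "ABORTED - SQ DELETION (00/08)" are the I/O still queued on qid:1 being drained while the TCP qpair is torn down for the failover from 10.0.0.2:4420 to 10.0.0.2:4421. The "(00/08)" pair is the NVMe status code type and status code: SCT 0x0 (generic command status) with SC 0x08 (Command Aborted due to SQ Deletion), and the trailing p/m/dnr fields come from the same status halfword. A minimal, spec-based sketch of decoding that halfword (completion dword 3, bits 31:16) follows; it only illustrates the bit layout from the NVMe base specification and is not the SPDK helper that printed these lines.

#include <stdint.h>
#include <stdio.h>

/*
 * Spec-based decode of the NVMe completion status halfword
 * (CQE dword 3, bits 31:16). Layout per the NVMe base spec:
 *   bit 0      P    phase tag
 *   bits 8:1   SC   status code
 *   bits 11:9  SCT  status code type
 *   bit 14     M    more
 *   bit 15     DNR  do not retry
 */
static void decode_status(uint16_t status)
{
	unsigned p   = status & 0x1;
	unsigned sc  = (status >> 1) & 0xff;
	unsigned sct = (status >> 9) & 0x7;
	unsigned m   = (status >> 14) & 0x1;
	unsigned dnr = (status >> 15) & 0x1;

	printf("sct:0x%x sc:0x%02x p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
	       (sct == 0x0 && sc == 0x08) ? " (ABORTED - SQ DELETION)" : "");
}

int main(void)
{
	decode_status(0x08 << 1); /* SCT 0x0 / SC 0x08, as printed throughout this log */
	return 0;
}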
00:18:03.339 [2024-07-15 22:12:38.967238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.339 [2024-07-15 22:12:38.967286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:38.967313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.339 [2024-07-15 22:12:38.967352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:38.967369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.339 [2024-07-15 22:12:38.967383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:38.967398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.339 [2024-07-15 22:12:38.967411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:38.967426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.339 [2024-07-15 22:12:38.967439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:38.967455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.339 [2024-07-15 22:12:38.967468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:38.967483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.339 [2024-07-15 22:12:38.967496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:38.967511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.339 [2024-07-15 22:12:38.967524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:38.967539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.339 [2024-07-15 22:12:38.967552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.339 [2024-07-15 22:12:38.967567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:68216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.339 [2024-07-15 22:12:38.967580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967595] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:68232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:68240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:68256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:68264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:68296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:68328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.967983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.967998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:68352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:68360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:68376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968228] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:68392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:68448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 
lba:68464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:68472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.340 [2024-07-15 22:12:38.968646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:68496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.340 [2024-07-15 22:12:38.968659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.968674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:68504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.968687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.968702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:68512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.968715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.968729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.968743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.968757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.968772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.968787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.968800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.968815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68544 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.968828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.968843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.968855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.968876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.968889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.968904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.968917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.968932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.968945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.968960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:68584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.968974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.968988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:68592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:68600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 
22:12:38.969151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.341 [2024-07-15 22:12:38.969468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.341 [2024-07-15 22:12:38.969908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.341 [2024-07-15 22:12:38.969921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.969935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.969948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.969963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.969976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.969996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970341] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970626] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:69064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69120 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.970976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.970991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.971004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.971019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:69152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.971032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.971046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.342 [2024-07-15 22:12:38.971059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.971105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:03.342 [2024-07-15 22:12:38.971121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:03.342 [2024-07-15 22:12:38.971140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68720 len:8 PRP1 0x0 PRP2 0x0 00:18:03.342 [2024-07-15 22:12:38.971154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.971201] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b81b80 was disconnected and freed. reset controller. 
00:18:03.342 [2024-07-15 22:12:38.971220] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:18:03.342 [2024-07-15 22:12:38.971275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.342 [2024-07-15 22:12:38.971296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.342 [2024-07-15 22:12:38.971310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.343 [2024-07-15 22:12:38.971323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:38.971337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.343 [2024-07-15 22:12:38.971350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:38.971363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.343 [2024-07-15 22:12:38.971376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:38.971389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:03.343 [2024-07-15 22:12:38.971423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b03e30 (9): Bad file descriptor 00:18:03.343 [2024-07-15 22:12:38.975402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:03.343 [2024-07-15 22:12:39.011512] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
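The burst of ABORTED - SQ DELETION completions above is the expected signature of a path failover: when bdev_nvme tears down the active TCP qpair, every queued I/O is completed with that status, bdev_nvme_failover_trid switches to the next registered transport ID (here from 10.0.0.2:4421 to 10.0.0.2:4422), and the controller is reset on the new path. This only works because the target is already listening on all of the alternate ports before a path is pulled. A minimal sketch of that target-side setup, using the same rpc.py call that appears later in this log; the subsystem NQN, address, and ports are taken from the log itself, and the loop over 4420 assumes that port was added the same way earlier in the test:

    # expose the subsystem on every port the failover test will rotate through
    for port in 4420 4421 4422; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done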
00:18:03.343 [2024-07-15 22:12:43.616227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616588] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616885] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.616982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.616997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5712 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.343 [2024-07-15 22:12:43.617420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.343 [2024-07-15 22:12:43.617433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:03.344 [2024-07-15 22:12:43.617490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.617975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.617988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.344 [2024-07-15 22:12:43.618516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.344 [2024-07-15 22:12:43.618530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-07-15 22:12:43.618558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-07-15 22:12:43.618586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-07-15 22:12:43.618615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-07-15 22:12:43.618644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-07-15 22:12:43.618672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618694] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-07-15 22:12:43.618707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-07-15 22:12:43.618736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.345 [2024-07-15 22:12:43.618764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.618792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.618821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.618848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.618877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.618908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.618936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.618964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.618979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:55 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.618992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6280 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619616] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.345 [2024-07-15 22:12:43.619800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.345 [2024-07-15 22:12:43.619813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.619837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-07-15 22:12:43.619851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.619867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-07-15 22:12:43.619881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.619895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-07-15 22:12:43.619908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.619923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-07-15 22:12:43.619936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.619952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-07-15 22:12:43.619965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.619980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-07-15 22:12:43.619993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.620008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-07-15 22:12:43.620023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.620039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.346 [2024-07-15 22:12:43.620052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.620066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b90c30 is same with the state(5) to be set 00:18:03.346 [2024-07-15 22:12:43.620093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:03.346 [2024-07-15 22:12:43.620116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:03.346 [2024-07-15 22:12:43.620129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6488 len:8 PRP1 0x0 PRP2 0x0 00:18:03.346 [2024-07-15 22:12:43.620142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.620190] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b90c30 was disconnected and freed. reset controller. 
00:18:03.346 [2024-07-15 22:12:43.620209] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:03.346 [2024-07-15 22:12:43.620266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.346 [2024-07-15 22:12:43.620287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.620302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.346 [2024-07-15 22:12:43.620315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.620351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.346 [2024-07-15 22:12:43.620365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.620378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.346 [2024-07-15 22:12:43.620391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.346 [2024-07-15 22:12:43.620404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:03.346 [2024-07-15 22:12:43.620439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b03e30 (9): Bad file descriptor 00:18:03.346 [2024-07-15 22:12:43.624392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:03.346 [2024-07-15 22:12:43.657453] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
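The grep at failover.sh@65 just below finds three Resetting controller successful lines in this run, one for each forced path switch; the 4421 to 4422 and 4422 to 4420 hops are the two visible above. A hedged sketch of that verification step, assuming the bdevperf output has been captured to the try.txt file referenced elsewhere in this log (the explicit error branch is illustrative; the script itself relies on its error trap when the count is wrong):

    # count completed failovers in the captured bdevperf log; the test expects exactly 3
    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi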
00:18:03.346 00:18:03.346 Latency(us) 00:18:03.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.346 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:03.346 Verification LBA range: start 0x0 length 0x4000 00:18:03.346 NVMe0n1 : 15.01 8633.25 33.72 215.12 0.00 14431.91 640.47 50045.67 00:18:03.346 =================================================================================================================== 00:18:03.346 Total : 8633.25 33.72 215.12 0.00 14431.91 640.47 50045.67 00:18:03.346 Received shutdown signal, test time was about 15.000000 seconds 00:18:03.346 00:18:03.346 Latency(us) 00:18:03.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.346 =================================================================================================================== 00:18:03.346 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88114 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:03.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88114 /var/tmp/bdevperf.sock 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88114 ']' 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:03.346 22:12:49 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:03.346 [2024-07-15 22:12:49.987412] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:03.346 22:12:50 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:03.641 [2024-07-15 22:12:50.299671] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:03.641 22:12:50 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:03.899 NVMe0n1 00:18:03.899 22:12:50 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:04.157 00:18:04.157 22:12:50 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:04.415 00:18:04.415 22:12:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:04.415 22:12:51 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:04.982 22:12:51 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:04.982 22:12:51 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:08.265 22:12:54 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:08.265 22:12:54 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:08.265 22:12:55 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:08.265 22:12:55 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88243 00:18:08.265 22:12:55 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88243 00:18:09.733 0 00:18:09.733 22:12:56 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:09.733 [2024-07-15 22:12:49.466867] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:18:09.733 [2024-07-15 22:12:49.467074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88114 ] 00:18:09.733 [2024-07-15 22:12:49.601630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.733 [2024-07-15 22:12:49.660271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.733 [2024-07-15 22:12:51.858103] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:09.733 [2024-07-15 22:12:51.858238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.733 [2024-07-15 22:12:51.858265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.733 [2024-07-15 22:12:51.858284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.733 [2024-07-15 22:12:51.858298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.733 [2024-07-15 22:12:51.858313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.733 [2024-07-15 22:12:51.858327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.733 [2024-07-15 22:12:51.858341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.733 [2024-07-15 22:12:51.858355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.733 [2024-07-15 22:12:51.858369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:09.733 [2024-07-15 22:12:51.858415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:09.733 [2024-07-15 22:12:51.858450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100ee30 (9): Bad file descriptor 00:18:09.733 [2024-07-15 22:12:51.861067] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:09.733 Running I/O for 1 seconds... 
00:18:09.733 00:18:09.733 Latency(us) 00:18:09.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.733 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:09.733 Verification LBA range: start 0x0 length 0x4000 00:18:09.733 NVMe0n1 : 1.01 8701.72 33.99 0.00 0.00 14630.73 2129.92 15728.64 00:18:09.733 =================================================================================================================== 00:18:09.733 Total : 8701.72 33.99 0.00 0.00 14630.73 2129.92 15728.64 00:18:09.734 22:12:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:09.734 22:12:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:09.734 22:12:56 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:09.992 22:12:56 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:09.992 22:12:56 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:10.251 22:12:57 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:10.509 22:12:57 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 88114 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88114 ']' 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88114 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88114 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88114' 00:18:13.790 killing process with pid 88114 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88114 00:18:13.790 22:13:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88114 00:18:14.047 22:13:00 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:14.047 22:13:00 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:14.306 22:13:01 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.306 rmmod nvme_tcp 00:18:14.306 rmmod nvme_fabrics 00:18:14.306 rmmod nvme_keyring 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87760 ']' 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87760 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87760 ']' 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87760 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87760 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87760' 00:18:14.306 killing process with pid 87760 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87760 00:18:14.306 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87760 00:18:14.564 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:14.564 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:14.564 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:14.564 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.564 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.564 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.564 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.564 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.564 22:13:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:14.564 00:18:14.564 real 0m31.918s 00:18:14.564 user 2m4.701s 00:18:14.564 sys 0m4.371s 00:18:14.564 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:14.564 22:13:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:14.564 ************************************ 00:18:14.564 END TEST nvmf_failover 00:18:14.564 ************************************ 00:18:14.564 22:13:01 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:18:14.564 22:13:01 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:14.564 22:13:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:14.564 22:13:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.564 22:13:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:14.564 ************************************ 00:18:14.564 START TEST nvmf_host_discovery 00:18:14.564 ************************************ 00:18:14.564 22:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:14.823 * Looking for test storage... 00:18:14.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:14.823 Cannot find device "nvmf_tgt_br" 00:18:14.823 
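The "Cannot find device" / "Cannot open network namespace" messages above are just cleanup of leftovers from a previous run; the ip commands that follow rebuild the test topology from scratch. What nvmf_veth_init constructs, reduced to a sketch (interface names and addresses as in the trace; a reading aid, not a replacement for nvmf/common.sh):

  ip netns add nvmf_tgt_ns_spdk
  # three veth pairs: one host-side initiator interface, two target interfaces that move into the netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator, 10.0.0.1/24
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target portal 1, 10.0.0.2/24
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target portal 2, 10.0.0.3/24
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the host-side peers so 10.0.0.1 can reach both target IPs
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that close the block (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply verify that this wiring is up before the target is started.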
22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.823 Cannot find device "nvmf_tgt_br2" 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:14.823 Cannot find device "nvmf_tgt_br" 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:14.823 Cannot find device "nvmf_tgt_br2" 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:14.823 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:14.824 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:14.824 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:14.824 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:15.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:18:15.081 00:18:15.081 --- 10.0.0.2 ping statistics --- 00:18:15.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.081 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:15.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:15.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:18:15.081 00:18:15.081 --- 10.0.0.3 ping statistics --- 00:18:15.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.081 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:15.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:15.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:15.081 00:18:15.081 --- 10.0.0.1 ping statistics --- 00:18:15.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.081 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88541 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88541 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88541 ']' 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.081 22:13:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.081 [2024-07-15 22:13:01.958773] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:18:15.081 [2024-07-15 22:13:01.958870] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.338 [2024-07-15 22:13:02.095349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.338 [2024-07-15 22:13:02.166211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:15.338 [2024-07-15 22:13:02.166273] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.338 [2024-07-15 22:13:02.166288] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.338 [2024-07-15 22:13:02.166298] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.338 [2024-07-15 22:13:02.166306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.339 [2024-07-15 22:13:02.166340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.339 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.339 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:15.339 22:13:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:15.339 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:15.339 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.596 [2024-07-15 22:13:02.319941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.596 [2024-07-15 22:13:02.328098] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.596 null0 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.596 null1 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88582 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88582 /tmp/host.sock 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88582 ']' 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:15.596 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.596 22:13:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:15.596 [2024-07-15 22:13:02.415780] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:18:15.596 [2024-07-15 22:13:02.415877] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88582 ] 00:18:15.853 [2024-07-15 22:13:02.552634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.853 [2024-07-15 22:13:02.610553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.448 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.448 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:16.448 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:16.448 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:16.448 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.448 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.448 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.448 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:16.448 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.448 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:16.706 22:13:03 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:16.706 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.963 [2024-07-15 22:13:03.752535] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
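Most of the xtrace from here on is the discovery test polling the host app after bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test. The helpers behind that polling, reconstructed from the trace (the authoritative definitions are in test/nvmf/host/discovery.sh and test/common/autotest_common.sh, so treat this as an approximation):

  get_subsystem_names() {   # controller names the host has attached, space-separated
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # bdevs (namespaces) exposed through those controllers
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  waitforcondition() {      # retry a shell condition up to 10 times, one second apart
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1              # exact failure handling in autotest_common.sh may differ
  }
  # typical checks from this test: discovery attaches controller nvme0, then its namespaces appear
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'

The empty-string comparisons ([[ '' == '' ]]) earlier in the trace are the same helpers confirming that nothing is attached before discovery starts.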
00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.963 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:18:17.221 22:13:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:17.479 [2024-07-15 22:13:04.413341] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:17.479 [2024-07-15 22:13:04.413383] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:17.479 [2024-07-15 22:13:04.413403] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:17.737 [2024-07-15 22:13:04.499509] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:17.737 [2024-07-15 22:13:04.556747] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:17.737 [2024-07-15 22:13:04.556797] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:18.304 22:13:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:18.304 22:13:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:18.304 22:13:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:18.304 22:13:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:18.304 22:13:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.304 22:13:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.304 22:13:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:18.304 22:13:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:18.304 22:13:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:18.304 22:13:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.304 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.304 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:18.304 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:18.304 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:18.304 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:18.305 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.565 [2024-07-15 22:13:05.325384] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:18.565 [2024-07-15 22:13:05.325993] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:18.565 [2024-07-15 22:13:05.326026] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:18.565 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:18.566 [2024-07-15 22:13:05.412531] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.566 [2024-07-15 22:13:05.476932] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:18.566 [2024-07-15 22:13:05.476981] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:18.566 [2024-07-15 22:13:05.476990] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:18.566 22:13:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.942 [2024-07-15 22:13:06.626690] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:19.942 [2024-07-15 22:13:06.626731] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:19.942 [2024-07-15 22:13:06.634661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.942 [2024-07-15 22:13:06.634703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.942 [2024-07-15 22:13:06.634718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.942 [2024-07-15 22:13:06.634729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.942 [2024-07-15 22:13:06.634739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.942 [2024-07-15 22:13:06.634748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.942 [2024-07-15 22:13:06.634759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.942 [2024-07-15 22:13:06.634768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.942 
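The autotest_common.sh@912-918 records that recur throughout this trace are all instances of the same poll-until-true helper: evaluate a condition string and, if it is not yet true, sleep one second and retry, up to ten times. A minimal bash reconstruction of that helper, inferred from the xtrace output above rather than copied from the SPDK source, would look roughly like this:

    # Sketch only; the function and variable names come from the trace,
    # the failure return value after the retry budget runs out is an assumption.
    waitforcondition() {
        local cond=$1          # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10           # matches "local max=10" in the trace
        while (( max-- )); do
            eval "$cond" && return 0   # condition met
            sleep 1                    # matches the "sleep 1" between attempts
        done
        return 1
    }

    # Used as in the trace, e.g.:
    # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'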
[2024-07-15 22:13:06.634778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dc50 is same with the state(5) to be set 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:19.942 [2024-07-15 22:13:06.644602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176dc50 (9): Bad file descriptor 00:18:19.942 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.942 [2024-07-15 22:13:06.654629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:19.942 [2024-07-15 22:13:06.654814] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.942 [2024-07-15 22:13:06.654854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176dc50 with addr=10.0.0.2, port=4420 00:18:19.942 [2024-07-15 22:13:06.654875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dc50 is same with the state(5) to be set 00:18:19.942 [2024-07-15 22:13:06.654901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176dc50 (9): Bad file descriptor 00:18:19.942 [2024-07-15 22:13:06.654922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:19.942 [2024-07-15 22:13:06.654938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:19.942 [2024-07-15 22:13:06.654956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:19.942 [2024-07-15 22:13:06.654983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:19.942 [2024-07-15 22:13:06.664717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:19.943 [2024-07-15 22:13:06.664817] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.943 [2024-07-15 22:13:06.664840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176dc50 with addr=10.0.0.2, port=4420 00:18:19.943 [2024-07-15 22:13:06.664852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dc50 is same with the state(5) to be set 00:18:19.943 [2024-07-15 22:13:06.664880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176dc50 (9): Bad file descriptor 00:18:19.943 [2024-07-15 22:13:06.664896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:19.943 [2024-07-15 22:13:06.664912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:19.943 [2024-07-15 22:13:06.664921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:19.943 [2024-07-15 22:13:06.664937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:19.943 [2024-07-15 22:13:06.674781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:19.943 [2024-07-15 22:13:06.674887] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.943 [2024-07-15 22:13:06.674911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176dc50 with addr=10.0.0.2, port=4420 00:18:19.943 [2024-07-15 22:13:06.674923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dc50 is same with the state(5) to be set 00:18:19.943 [2024-07-15 22:13:06.674940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176dc50 (9): Bad file descriptor 00:18:19.943 [2024-07-15 22:13:06.674956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:19.943 [2024-07-15 22:13:06.674965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:19.943 [2024-07-15 22:13:06.674974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:19.943 [2024-07-15 22:13:06.674990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:19.943 [2024-07-15 22:13:06.684860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:19.943 [2024-07-15 22:13:06.685015] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.943 [2024-07-15 22:13:06.685051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176dc50 with addr=10.0.0.2, port=4420 00:18:19.943 [2024-07-15 22:13:06.685072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dc50 is same with the state(5) to be set 00:18:19.943 [2024-07-15 22:13:06.685118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176dc50 (9): Bad file descriptor 00:18:19.943 [2024-07-15 22:13:06.685156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:19.943 [2024-07-15 22:13:06.685169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:19.943 [2024-07-15 22:13:06.685180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:19.943 [2024-07-15 22:13:06.685196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.943 [2024-07-15 22:13:06.694949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:19.943 [2024-07-15 22:13:06.695105] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.943 [2024-07-15 22:13:06.695130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176dc50 with addr=10.0.0.2, port=4420 00:18:19.943 [2024-07-15 22:13:06.695143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dc50 is same with the state(5) to be set 00:18:19.943 [2024-07-15 22:13:06.695162] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176dc50 (9): Bad file descriptor 00:18:19.943 [2024-07-15 22:13:06.695190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:19.943 [2024-07-15 22:13:06.695201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:19.943 [2024-07-15 22:13:06.695212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:19.943 [2024-07-15 22:13:06.695228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:19.943 [2024-07-15 22:13:06.705037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:19.943 [2024-07-15 22:13:06.705191] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:19.943 [2024-07-15 22:13:06.705224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176dc50 with addr=10.0.0.2, port=4420 00:18:19.943 [2024-07-15 22:13:06.705244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176dc50 is same with the state(5) to be set 00:18:19.943 [2024-07-15 22:13:06.705270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176dc50 (9): Bad file descriptor 00:18:19.943 [2024-07-15 22:13:06.705295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:19.943 [2024-07-15 22:13:06.705312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:19.943 [2024-07-15 22:13:06.705329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:19.943 [2024-07-15 22:13:06.705354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:19.943 [2024-07-15 22:13:06.713769] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:19.943 [2024-07-15 22:13:06.713818] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:19.943 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:19.944 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.944 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.944 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:19.944 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:20.202 
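get_notification_count, traced at host/discovery.sh lines 74-75, counts only the notifications that have arrived since the previous check: it asks the host RPC for notifications newer than the running notify_id and then advances that id by the number returned, which is why notify_id stays at 2 here and later moves to 4 once two new notifications are seen. A hedged reconstruction, with the id arithmetic inferred from those values rather than taken from the script:

    # rpc_cmd is the harness wrapper around scripts/rpc.py, as seen in the trace.
    notify_id=0
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
                             | jq '. | length')
        notify_id=$((notify_id + notification_count))   # advance the high-water mark
    }

    # The assertions in the trace then reduce to:
    #   get_notification_count && (( notification_count == expected_count ))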
22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:20.202 22:13:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.202 22:13:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:20.202 22:13:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:20.202 22:13:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:20.202 22:13:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:20.202 22:13:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:20.202 22:13:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.202 22:13:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.134 [2024-07-15 22:13:08.043806] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:21.134 [2024-07-15 22:13:08.043856] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:21.134 [2024-07-15 22:13:08.043876] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:21.392 [2024-07-15 22:13:08.129980] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:21.392 [2024-07-15 22:13:08.190916] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:21.392 [2024-07-15 22:13:08.190998] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.392 2024/07/15 22:13:08 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 
trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:21.392 request: 00:18:21.392 { 00:18:21.392 "method": "bdev_nvme_start_discovery", 00:18:21.392 "params": { 00:18:21.392 "name": "nvme", 00:18:21.392 "trtype": "tcp", 00:18:21.392 "traddr": "10.0.0.2", 00:18:21.392 "adrfam": "ipv4", 00:18:21.392 "trsvcid": "8009", 00:18:21.392 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:21.392 "wait_for_attach": true 00:18:21.392 } 00:18:21.392 } 00:18:21.392 Got JSON-RPC error response 00:18:21.392 GoRPCClient: error on JSON-RPC call 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:21.392 22:13:08 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.392 2024/07/15 22:13:08 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:21.392 request: 00:18:21.392 { 00:18:21.392 "method": "bdev_nvme_start_discovery", 00:18:21.392 "params": { 00:18:21.392 "name": "nvme_second", 00:18:21.392 "trtype": "tcp", 00:18:21.392 "traddr": "10.0.0.2", 00:18:21.392 "adrfam": "ipv4", 00:18:21.392 "trsvcid": "8009", 00:18:21.392 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:21.392 "wait_for_attach": true 00:18:21.392 } 00:18:21.392 } 00:18:21.392 Got JSON-RPC error response 00:18:21.392 GoRPCClient: error on JSON-RPC call 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.392 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.651 22:13:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:22.584 [2024-07-15 22:13:09.467448] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:22.584 [2024-07-15 22:13:09.467524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1769f00 with addr=10.0.0.2, port=8010 00:18:22.584 [2024-07-15 22:13:09.467546] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:22.584 [2024-07-15 22:13:09.467558] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:22.584 [2024-07-15 22:13:09.467567] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:23.518 [2024-07-15 22:13:10.467447] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:23.518 [2024-07-15 22:13:10.467526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1769f00 with addr=10.0.0.2, port=8010 00:18:23.518 [2024-07-15 22:13:10.467549] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:23.518 [2024-07-15 22:13:10.467560] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:23.518 [2024-07-15 22:13:10.467569] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:24.891 [2024-07-15 22:13:11.467273] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:18:24.891 2024/07/15 22:13:11 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test 
name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:18:24.891 request: 00:18:24.891 { 00:18:24.891 "method": "bdev_nvme_start_discovery", 00:18:24.891 "params": { 00:18:24.891 "name": "nvme_second", 00:18:24.891 "trtype": "tcp", 00:18:24.891 "traddr": "10.0.0.2", 00:18:24.891 "adrfam": "ipv4", 00:18:24.891 "trsvcid": "8010", 00:18:24.891 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:24.891 "wait_for_attach": false, 00:18:24.891 "attach_timeout_ms": 3000 00:18:24.891 } 00:18:24.891 } 00:18:24.891 Got JSON-RPC error response 00:18:24.891 GoRPCClient: error on JSON-RPC call 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88582 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:24.891 rmmod nvme_tcp 00:18:24.891 rmmod nvme_fabrics 00:18:24.891 rmmod nvme_keyring 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88541 ']' 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88541 00:18:24.891 
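The two JSON-RPC failures above are deliberate: starting discovery a second time under an existing -b name returns Code=-17 (File exists), and pointing nvme_second at the unreachable 8010 port with attach_timeout_ms=3000 returns Code=-110 (Connection timed out). The NOT wrapper traced at autotest_common.sh lines 648-675 turns such expected failures into test successes; a simplified, hedged reconstruction follows (the real helper also special-cases signal exits above 128 and an optional allowed-error pattern, which this sketch omits):

    NOT() {
        local es=0
        "$@" || es=$?      # run the wrapped command, e.g. rpc_cmd ...
        (( es != 0 ))      # succeed only if the wrapped command failed
    }

    # As used in the trace:
    # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    #     -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w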
22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88541 ']' 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88541 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88541 00:18:24.891 killing process with pid 88541 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88541' 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88541 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88541 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.891 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:25.150 ************************************ 00:18:25.150 END TEST nvmf_host_discovery 00:18:25.150 ************************************ 00:18:25.150 00:18:25.150 real 0m10.386s 00:18:25.150 user 0m21.012s 00:18:25.150 sys 0m1.519s 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.150 22:13:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:25.150 22:13:11 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:25.150 22:13:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:25.150 22:13:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.150 22:13:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:25.150 ************************************ 00:18:25.150 START TEST nvmf_host_multipath_status 00:18:25.150 ************************************ 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:25.150 * Looking for test storage... 
00:18:25.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.150 22:13:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:25.150 Cannot find device "nvmf_tgt_br" 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:18:25.150 Cannot find device "nvmf_tgt_br2" 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:25.150 Cannot find device "nvmf_tgt_br" 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:25.150 Cannot find device "nvmf_tgt_br2" 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:25.150 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:25.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:25.408 22:13:12 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:25.408 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:25.666 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:25.666 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:25.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:18:25.666 00:18:25.666 --- 10.0.0.2 ping statistics --- 00:18:25.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.666 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:25.666 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:25.666 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:25.666 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:18:25.666 00:18:25.666 --- 10.0.0.3 ping statistics --- 00:18:25.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.667 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:25.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:25.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:18:25.667 00:18:25.667 --- 10.0.0.1 ping statistics --- 00:18:25.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.667 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=89068 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 89068 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89068 ']' 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.667 22:13:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:25.667 [2024-07-15 22:13:12.459584] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:18:25.667 [2024-07-15 22:13:12.459690] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.667 [2024-07-15 22:13:12.601211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:25.925 [2024-07-15 22:13:12.669187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.925 [2024-07-15 22:13:12.669251] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.925 [2024-07-15 22:13:12.669264] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.925 [2024-07-15 22:13:12.669275] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.925 [2024-07-15 22:13:12.669283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.925 [2024-07-15 22:13:12.669755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.925 [2024-07-15 22:13:12.669805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.496 22:13:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.496 22:13:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:26.496 22:13:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:26.496 22:13:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:26.496 22:13:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:26.753 22:13:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.753 22:13:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89068 00:18:26.753 22:13:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:26.753 [2024-07-15 22:13:13.683778] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.011 22:13:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:27.269 Malloc0 00:18:27.269 22:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:27.528 22:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:27.785 22:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.786 [2024-07-15 22:13:14.731770] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.043 22:13:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:18:28.302 [2024-07-15 22:13:15.075966] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:28.302 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89166 00:18:28.302 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.302 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:28.302 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89166 /var/tmp/bdevperf.sock 00:18:28.302 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89166 ']' 00:18:28.302 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.302 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.302 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.302 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.302 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:28.560 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.560 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:28.560 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:28.819 22:13:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:29.410 Nvme0n1 00:18:29.410 22:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:29.668 Nvme0n1 00:18:29.668 22:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:29.668 22:13:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:31.564 22:13:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:31.564 22:13:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:32.129 22:13:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n optimized 00:18:32.386 22:13:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:33.317 22:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:33.317 22:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:33.317 22:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.317 22:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:33.575 22:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:33.575 22:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:33.575 22:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.575 22:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:33.835 22:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:33.835 22:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:33.835 22:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:33.835 22:13:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:34.406 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:34.406 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:34.406 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:34.406 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.406 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:34.406 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:34.406 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.406 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:34.663 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:34.663 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:34.663 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.663 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:34.921 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:34.921 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:34.921 22:13:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:35.487 22:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:35.487 22:13:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:36.862 22:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:36.862 22:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:36.862 22:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.862 22:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:36.862 22:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:36.862 22:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:36.862 22:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.862 22:13:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:37.427 22:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.427 22:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:37.427 22:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.427 22:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:37.685 22:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.685 22:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:37.685 22:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.685 22:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:37.942 22:13:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.942 22:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:37.942 22:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.942 22:13:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:38.200 22:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:38.200 22:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:38.458 22:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:38.458 22:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:38.715 22:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:38.715 22:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:38.715 22:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:38.973 22:13:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:39.231 22:13:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:40.162 22:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:40.162 22:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:40.162 22:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.162 22:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:40.726 22:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.726 22:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:40.726 22:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.727 22:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:40.984 22:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:40.984 22:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:40.984 22:13:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.984 22:13:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:41.241 22:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.241 22:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:41.241 22:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.241 22:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:41.807 22:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.807 22:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:41.807 22:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.807 22:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:41.807 22:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.807 22:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:42.066 22:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:42.066 22:13:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:42.324 22:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:42.324 22:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:42.324 22:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:42.583 22:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:42.840 22:13:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:43.775 22:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:43.775 22:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:43.775 22:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:43.775 22:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:44.033 22:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.033 22:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:44.033 22:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.033 22:13:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:44.292 22:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:44.292 22:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:44.292 22:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.292 22:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:44.549 22:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.550 22:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:44.550 22:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.550 22:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:45.115 22:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:45.115 22:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:45.115 22:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:45.115 22:13:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:45.115 22:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:45.115 22:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:45.115 22:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:45.115 22:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:45.373 22:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:45.374 22:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:45.374 22:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:45.631 22:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:46.197 22:13:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:47.131 22:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:47.131 22:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:47.131 22:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.131 22:13:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:47.390 22:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:47.390 22:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:47.390 22:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.390 22:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:47.649 22:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:47.649 22:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:47.649 22:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.649 22:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:47.907 22:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.907 22:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:47.907 22:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.907 22:13:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:48.165 22:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:48.165 22:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:48.165 22:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.165 22:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:48.732 22:13:35 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:48.732 22:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:48.732 22:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.732 22:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:48.732 22:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:48.732 22:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:48.732 22:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:48.990 22:13:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:49.557 22:13:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:50.489 22:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:50.489 22:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:50.489 22:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.489 22:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:50.746 22:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:50.746 22:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:50.746 22:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:50.746 22:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.011 22:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.011 22:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:51.011 22:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.011 22:13:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:51.281 22:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.281 22:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:51.281 22:13:38 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.281 22:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:51.846 22:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.846 22:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:51.846 22:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.846 22:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:51.846 22:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:51.846 22:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:51.846 22:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.846 22:13:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:52.103 22:13:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.103 22:13:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:52.670 22:13:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:52.670 22:13:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:52.927 22:13:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:53.183 22:13:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:54.115 22:13:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:54.115 22:13:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:54.115 22:13:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:54.115 22:13:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.374 22:13:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.374 22:13:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:54.374 22:13:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.374 22:13:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:54.631 22:13:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.631 22:13:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:54.631 22:13:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.631 22:13:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:54.889 22:13:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.889 22:13:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:54.889 22:13:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.889 22:13:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:55.151 22:13:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.152 22:13:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:55.152 22:13:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.152 22:13:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:55.420 22:13:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.420 22:13:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:55.420 22:13:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.420 22:13:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:55.701 22:13:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.701 22:13:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:55.701 22:13:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:55.959 22:13:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:56.525 22:13:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:57.458 
22:13:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:57.458 22:13:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:57.458 22:13:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.458 22:13:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:57.715 22:13:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:57.715 22:13:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:57.715 22:13:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.715 22:13:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:57.973 22:13:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.973 22:13:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:57.973 22:13:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.973 22:13:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:58.231 22:13:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.231 22:13:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:58.231 22:13:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.231 22:13:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:58.489 22:13:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.489 22:13:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:58.489 22:13:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.489 22:13:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:59.056 22:13:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.056 22:13:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:59.056 22:13:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.056 22:13:45 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:59.314 22:13:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.314 22:13:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:59.314 22:13:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:59.572 22:13:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:59.831 22:13:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:01.213 22:13:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:01.213 22:13:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:01.213 22:13:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:01.213 22:13:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.213 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.213 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:01.213 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.213 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:01.779 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.779 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:01.779 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.779 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:02.037 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.037 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:02.037 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.037 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:02.037 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.037 22:13:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:02.037 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:02.037 22:13:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.295 22:13:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.295 22:13:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:02.295 22:13:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.295 22:13:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:02.554 22:13:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.554 22:13:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:02.554 22:13:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:03.121 22:13:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:03.380 22:13:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:04.316 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:04.316 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:04.316 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.316 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:04.575 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.575 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:04.575 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:04.575 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.834 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:04.834 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:04.834 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.834 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:05.093 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.093 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:05.093 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.093 22:13:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:05.351 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.351 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:05.351 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.351 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89166 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89166 ']' 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89166 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89166 00:19:05.916 killing process with pid 89166 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89166' 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89166 00:19:05.916 22:13:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89166 00:19:06.193 Connection closed with partial response: 00:19:06.193 
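For readability, here is a minimal sketch of the multipath-status helpers being exercised in the xtrace above. The commands themselves (rpc.py bdev_nvme_get_io_paths piped through jq, and nvmf_subsystem_listener_set_ana_state for ports 4420/4421) are taken directly from the trace of host/multipath_status.sh; the function structure, local variable names, and the hard-coded defaults below are assumptions for illustration, not the script's actual definitions.

#!/usr/bin/env bash
# Sketch of the helpers visible in the trace above (host/multipath_status.sh).
# Paths and identifiers below are copied from the log; treat them as examples.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1
target_ip=10.0.0.2

# port_status <trsvcid> <field> <expected>
# Ask bdevperf for its view of the I/O paths and verify one flag
# (current / connected / accessible) of the path on the given port.
port_status() {
	local port=$1 field=$2 expected=$3 actual
	actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
		jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
	[[ "$actual" == "$expected" ]]
}

# check_status <4420-current> <4421-current> <4420-connected> <4421-connected> <4420-accessible> <4421-accessible>
check_status() {
	port_status 4420 current "$1" &&
	port_status 4421 current "$2" &&
	port_status 4420 connected "$3" &&
	port_status 4421 connected "$4" &&
	port_status 4420 accessible "$5" &&
	port_status 4421 accessible "$6"
}

# set_ANA_state <state-for-4420-listener> <state-for-4421-listener>
# These RPCs go to the nvmf target (default RPC socket), not to bdevperf.
set_ANA_state() {
	"$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a "$target_ip" -s 4420 -n "$1"
	"$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a "$target_ip" -s 4421 -n "$2"
}

Read this way, the step at @133-@135 above is: set_ANA_state non_optimized inaccessible, sleep 1, then check_status true false true true true false, i.e. after the 4421 listener is made inaccessible, only the 4420 path should remain current and accessible while both stay connected.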
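The teardown at the end of the trace uses the killprocess helper from common/autotest_common.sh (@948-@972 above). The sketch below mirrors the checks visible in the xtrace: validate the pid, confirm the process is alive, look up its command name, refuse to kill a bare sudo wrapper, then kill and reap it. The non-Linux branch and the sudo handling are placeholders, not the helper's real behaviour.

# killprocess <pid>  -- illustrative reconstruction, assumptions noted inline
killprocess() {
	local pid=$1 process_name
	[ -n "$pid" ] || return 1
	kill -0 "$pid" 2>/dev/null || return 1          # still running?
	if [ "$(uname)" = Linux ]; then
		process_name=$(ps --no-headers -o comm= "$pid")
	else
		process_name=$(ps -o comm= -p "$pid")       # assumption for non-Linux hosts
	fi
	if [ "$process_name" = sudo ]; then
		echo "refusing to kill sudo wrapper for pid $pid" >&2   # placeholder; real helper differs
		return 1
	fi
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid" || true
}

In the run above the target is the bdevperf reactor (process_name=reactor_2, pid 89166), so the sudo check is a no-op and the helper simply kills bdevperf and waits for it, which is why the next lines show "Connection closed with partial response" followed by the dump of test/nvmf/host/try.txt.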
00:19:06.193 00:19:06.193 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89166 00:19:06.193 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:06.193 [2024-07-15 22:13:15.144546] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:19:06.193 [2024-07-15 22:13:15.144652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89166 ] 00:19:06.193 [2024-07-15 22:13:15.277467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.193 [2024-07-15 22:13:15.337267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.193 Running I/O for 90 seconds... 00:19:06.193 [2024-07-15 22:13:32.507543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.507624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.507666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.507686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.507710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.507728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.507751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.507768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.507791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.507808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.507831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.507848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.507871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.507888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.507911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.507927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.507950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.507967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.507990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:06.193 
[2024-07-15 22:13:32.508363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.193 [2024-07-15 22:13:32.508580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:06.193 [2024-07-15 22:13:32.508616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.508633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.508656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.508683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.508706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.508723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.508745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.508762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.508785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.508802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.508825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.508841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.508865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.508881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.508905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.508921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.508944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.508961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.508984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509587] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.509611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.509636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.194 [2024-07-15 22:13:32.510370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.510961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.510984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.511001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.511025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.194 [2024-07-15 22:13:32.511041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.194 [2024-07-15 22:13:32.511064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:71 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 
22:13:32.511539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.511967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.511991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.512008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.195 [2024-07-15 22:13:32.512048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512370] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 
22:13:32.512790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.195 [2024-07-15 22:13:32.512830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:06.195 [2024-07-15 22:13:32.512854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.512870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.512894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.512918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.512943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.512960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.512983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.513000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.513024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.513041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.513064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.513091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.513119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.513137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.513160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.513177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.513200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:56584 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.513217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.513241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.513257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.513280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.513297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.513320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.513337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.513361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.513377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.513400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.513417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.513454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.513472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514822] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.514969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.514992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.515009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.515032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.515049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.515072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.515105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.515131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.515149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.515172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.515189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.515212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.515230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:19:06.196 [2024-07-15 22:13:32.515253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.515279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.515304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.515321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.515345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.515362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.515385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.196 [2024-07-15 22:13:32.515401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:06.196 [2024-07-15 22:13:32.515424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.515974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.515997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.516014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.516037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.516054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.516077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.516108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.516133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.516150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.516183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.516202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.516226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.516243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.516267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.516284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.516316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.516334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.516357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.516374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.516397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.516414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.516437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.516454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.516478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:06.197 [2024-07-15 22:13:32.516496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.517202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.517231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.517260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.197 [2024-07-15 22:13:32.517278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.517302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.197 [2024-07-15 22:13:32.517319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.517342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.197 [2024-07-15 22:13:32.517359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.517383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.197 [2024-07-15 22:13:32.517400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.517423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.197 [2024-07-15 22:13:32.517440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.517464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.197 [2024-07-15 22:13:32.517480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:06.197 [2024-07-15 22:13:32.517518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.517536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.517560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.517577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.517600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.517617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.517640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.517657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.517681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.517698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.517721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.517738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.517761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.517779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.517802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.517819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.517842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.517859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.517883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.517900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.517923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.517940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.517963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.517980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 
dnr:0 00:19:06.198 [2024-07-15 22:13:32.518436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.198 [2024-07-15 22:13:32.518945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.518969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.198 [2024-07-15 22:13:32.518986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.519011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.198 [2024-07-15 22:13:32.519035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.519060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.198 [2024-07-15 22:13:32.519077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.519118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.198 [2024-07-15 22:13:32.519136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.519159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.198 [2024-07-15 22:13:32.519176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.519199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.198 [2024-07-15 22:13:32.519215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.519238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.198 [2024-07-15 22:13:32.519255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:06.198 [2024-07-15 22:13:32.519278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:06.199 [2024-07-15 22:13:32.519663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.519965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.519981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.520005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.520021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.520052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.520070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.520106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.520125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.520148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.520165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.520204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.520221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.520245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.520262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.520286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.520303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:19:06.199 [2024-07-15 22:13:32.521788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.199 [2024-07-15 22:13:32.521884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.199 [2024-07-15 22:13:32.521907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.521924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.521948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.521976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.522975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.522998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:06.200 [2024-07-15 22:13:32.523023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.523048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.523066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.523103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.523123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.523147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.523164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.523187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.523204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.523227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.523244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.523267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.523283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.523308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.523327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.523352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.523369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.523393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.523410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.524122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56352 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.524151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.524194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.524215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.524240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.200 [2024-07-15 22:13:32.524272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.524297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.200 [2024-07-15 22:13:32.524315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.524338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.200 [2024-07-15 22:13:32.524355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.524379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.200 [2024-07-15 22:13:32.524396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:06.200 [2024-07-15 22:13:32.538681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.538749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.538784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.538807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.538837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.538857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.538886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.538907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.538936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.538957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.538986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 
dnr:0 00:19:06.201 [2024-07-15 22:13:32.539524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.539956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.539985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.540006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.540035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.540055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.540098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.540122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.540152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.540190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.540222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.540244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.540274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.540294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.540323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.540343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.540372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.540393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.540422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.540443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.540471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.540492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.540533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.540555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.540585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.201 [2024-07-15 22:13:32.540606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:06.201 [2024-07-15 22:13:32.540634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.202 [2024-07-15 22:13:32.540655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.540685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.202 [2024-07-15 22:13:32.540706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.540736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.540757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.540787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.540808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.540837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.540858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.540887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.540908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.540937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.540958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.540987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:06.202 [2024-07-15 22:13:32.541057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.541973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.541994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.542023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.542043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.542072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.542107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.542138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.542159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.542189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.542209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.542238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.542258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.542289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.542310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.543519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.543558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.543599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.543622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.543669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.543691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.543721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.543742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.543772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.543793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:06.202 
[2024-07-15 22:13:32.543822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.543842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.543872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.543892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.543921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.543942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.543971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.202 [2024-07-15 22:13:32.543992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.202 [2024-07-15 22:13:32.544021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.544969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.544989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545345] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.545959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.545979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.546007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.546027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.546055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.546074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:06.203 [2024-07-15 22:13:32.546118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.203 [2024-07-15 22:13:32.546140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.546179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.204 [2024-07-15 22:13:32.546199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.546228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.204 [2024-07-15 22:13:32.546247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.204 [2024-07-15 22:13:32.547132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:122 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.204 [2024-07-15 22:13:32.547189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.204 [2024-07-15 22:13:32.547238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.204 [2024-07-15 22:13:32.547287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547673] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.547964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.547988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 
sqhd:0043 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:55864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.548959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.548979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.549007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.549027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.549055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.549078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:06.204 [2024-07-15 22:13:32.549123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.204 [2024-07-15 22:13:32.549145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.549180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.205 [2024-07-15 22:13:32.549215] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.549261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.205 [2024-07-15 22:13:32.549294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.549339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.205 [2024-07-15 22:13:32.549371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.549413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.205 [2024-07-15 22:13:32.549446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.549491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.549521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.549561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.549589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.549628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.549657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.549718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.549749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.549792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.549825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.549870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.549909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.549953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:06.205 [2024-07-15 22:13:32.549983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 
nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.550977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.550997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.551024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.551044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.551073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.551126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.551160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.551181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.551209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.551230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.551258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.551289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.551318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.551338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.551367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.551387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.205 [2024-07-15 22:13:32.552513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.205 [2024-07-15 22:13:32.552551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.552589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.552611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.552640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.552660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.552688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.552707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.552735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.552755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.552783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.552802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:19:06.206 [2024-07-15 22:13:32.552831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.552851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.552879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.552899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.552928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.552948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.552976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.553952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.553972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.554020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.554068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.554149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.554197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.554245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.554306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:06.206 [2024-07-15 22:13:32.554353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.554401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.554449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.554497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.554545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.554593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.206 [2024-07-15 22:13:32.554621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.206 [2024-07-15 22:13:32.554641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.554669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.554689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.554717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.554737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.554765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.554785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.554812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.554832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.554860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.554888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.554917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.554937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.554965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.554985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.555013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.555033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.555060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.555102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.555149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.555172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.555201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.555221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.556142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.556221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.556271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.556320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.207 [2024-07-15 22:13:32.556372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 
dnr:0 00:19:06.207 [2024-07-15 22:13:32.556759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.556955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.556971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:06.207 [2024-07-15 22:13:32.557523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.207 [2024-07-15 22:13:32.557541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.557564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.557580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.557603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.557619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.557642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.557659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.557682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.557698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.557722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.557738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.557761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.557778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.557801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.557817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.557841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.557857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.557881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.557897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.557920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.557937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.557960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:06.208 [2024-07-15 22:13:32.557976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.557999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.558022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.558046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.558063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.558106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.208 [2024-07-15 22:13:32.558130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.558155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.558172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.558195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.558212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.558235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.558251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.558274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.558291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.558314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.558330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.558353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.558369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.569572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.569612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.569641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.569660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.569683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.569700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.569724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.569756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.569782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.569799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.569822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.569839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.569862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.569878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.569901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.569918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.569941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.569957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.569980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.569996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.570019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.570035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.570058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.570075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.570121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.570138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.570162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.570179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.570202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.570218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.570241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.570267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.570293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.570310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.570333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.570349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.570373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.570390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.570414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.570430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:19:06.208 [2024-07-15 22:13:32.570453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.570470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.208 [2024-07-15 22:13:32.570493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.208 [2024-07-15 22:13:32.570510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.570534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.570550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.570892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.570923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.570978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571745] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.571968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.571985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572283] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.572941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.572963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.573001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.573023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.573062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.573084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.209 [2024-07-15 22:13:32.573139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.209 [2024-07-15 22:13:32.573163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.573201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.573223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.573276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.573299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.573338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.573361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.573399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.573422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.573460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.573482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.573520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 
nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.573542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.573581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.573603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.573642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.573664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.573702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.573724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.573763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.573785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.573823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.573845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.573884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.573907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:32.574130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:32.574162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.052576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.052681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.052735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.052762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.052795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.052820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.052851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.052874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.052905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.052930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.052965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.052989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:19:06.210 [2024-07-15 22:13:50.053402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.053951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.053984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.054010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.054066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.054116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:06.210 [2024-07-15 22:13:50.054154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.210 [2024-07-15 22:13:50.054181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.054226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.054252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.054286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.054329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.054365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.054392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.054425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.054451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.054484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.054509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.054561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.054591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.054627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.054654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.054709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.054738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.054799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.054829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.054864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.054890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.054924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.054949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.054984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.055011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.055073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.055156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.055234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.055293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.055363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.055424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:06.211 [2024-07-15 22:13:50.055485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.055544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.055600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.055655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.055709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.055766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.055824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.055882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.055956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.055990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.056015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.056049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.056074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.056132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.056163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.059020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.059064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.059133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.059165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.059224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.059256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.059328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.059358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.059395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.211 [2024-07-15 22:13:50.059420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.059462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.059489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.059538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.059566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.059601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.059627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.059661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.059711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.059750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.059777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.059811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.059837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.211 [2024-07-15 22:13:50.059870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.211 [2024-07-15 22:13:50.059895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.059928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.059956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.059993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.060021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.060101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.060173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.060277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.060340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:19:06.212 [2024-07-15 22:13:50.060374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.060400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.060460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.060536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.060597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.060657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.060715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.060775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.060835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.060896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.060958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.060991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.061021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.061077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.061158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.061221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.061279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.061355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.061413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.061473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.061533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.061592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.061645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.061699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.061753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.061806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.061864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.061924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.061994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.062025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.064207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.064250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.064291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.064318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.064350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:06.212 [2024-07-15 22:13:50.064373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.064405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.064430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.064463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.064489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.064524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.064548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.064581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.212 [2024-07-15 22:13:50.064609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.064667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.064697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.212 [2024-07-15 22:13:50.064739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.212 [2024-07-15 22:13:50.064766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.064802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.064828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.064866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.064893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.064936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.064965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.065046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.065127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.065192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.065252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.065310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.065371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.065436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.065499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.065584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.065651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.065712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.065771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.065846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.065909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.065943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.065968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.066028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.066106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.066173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.066235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.066295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:19:06.213 [2024-07-15 22:13:50.066329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.066355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.066411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.066466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.066523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.066594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.066656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.066713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.066771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.066830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.066890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.066924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.066948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.069742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.069798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.069843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.069870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.069906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.069932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.069964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.069990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.070039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.070069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.070131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.213 [2024-07-15 22:13:50.070161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.070216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.070254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.213 [2024-07-15 22:13:50.070288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.213 [2024-07-15 22:13:50.070315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.070367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.070396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.070431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.070456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.070490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.070515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.070549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.070576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.070610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.070636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.070670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.070695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.070728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.070752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.070788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.070815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.070852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.070878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.070912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.070938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:06.214 [2024-07-15 22:13:50.071035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.071119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.071184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.071246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.071308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.071366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.071436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.071495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.071555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.071614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.071676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.071736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.071811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.071871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.071926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.071957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.071981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.072014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.214 [2024-07-15 22:13:50.072039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.072073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.072123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.072161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.072188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.072240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.072267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.072300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.072325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.072361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.072389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.072422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.072462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.072498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.072524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.072557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.072599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:06.214 [2024-07-15 22:13:50.072640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.214 [2024-07-15 22:13:50.072669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.072706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.072734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.072771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.072800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.072838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.072868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.076850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.076909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:19:06.215 [2024-07-15 22:13:50.076957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.076984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.077039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.077119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.077186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.077252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.077401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.077468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.077573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.077639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.077698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.077759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.077822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.077883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.077951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.077984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.078037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.078118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.078187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.078265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.078330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.078406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.078468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.078531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.078590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.078649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.078708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.078768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.078828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.078890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.078951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.078985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:06.215 [2024-07-15 22:13:50.079010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.079044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.079068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.079120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.079162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.079198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.079223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.079256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.079282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.079315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.079339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.079373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.215 [2024-07-15 22:13:50.079398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.080478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.080534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.215 [2024-07-15 22:13:50.080581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.215 [2024-07-15 22:13:50.080609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.080644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.080669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.080704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.080732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.080769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.080796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.080831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.080857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.080898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.080927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.080965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.081011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.081048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.081073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.081129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.081158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.081196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.081222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.081257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.081282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.081316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.081341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.081375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.081400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.081434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.081459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.081495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.081520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.081556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.081582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.083110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.083193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.083256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.083336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.083395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.083455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:19:06.216 [2024-07-15 22:13:50.083493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.083532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.083593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.083684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.083754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.083813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.083873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.083934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.083969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.083996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.084056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.084164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.084241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.084302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.084363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.216 [2024-07-15 22:13:50.084423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.084480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.084538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.084593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.084658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.084717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.084776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:06.216 [2024-07-15 22:13:50.084808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.216 [2024-07-15 22:13:50.084833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.084867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.084906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.084941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.084966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.084999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.085027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.085062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.085105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.085140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.085167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.085201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.085225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.085256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.085280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.085313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.085338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.085371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:06.217 [2024-07-15 22:13:50.085397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.085429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.085453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.085506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.085536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.085614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.085642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.088236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.088313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.088363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.088392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.088432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.088460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.088498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.088525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.088560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.088586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.088620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.088645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.088680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.088707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.088742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.088767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.088802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.088827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.088860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.088886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.088924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.088952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.088986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.089012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.089127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.089217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.089278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.089339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.089401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.089461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.089521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.089581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.089652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.089712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.089773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.089833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.089891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.089937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.089963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
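(The "(03/02)" pair printed with every completion above is the NVMe status the host sees while the active path reports ANA Inaccessible during this multipath failover: status code type 0x3, path-related, with status code 0x02, Asymmetric Access Inaccessible. A minimal, illustrative sketch of decoding that pair from a raw 16-bit completion status word follows; it is not part of the SPDK test scripts, and it assumes the usual packing of the phase tag in bit 0, an 8-bit status code, then a 3-bit status code type.)
def decode_cpl_status(status: int) -> str:
    """Illustrative helper (not from SPDK): render a status word as "(SCT/SC)"."""
    sc = (status >> 1) & 0xFF   # Status Code (assumed bits 8:1)
    sct = (status >> 9) & 0x7   # Status Code Type (assumed bits 11:9)
    path_related = {            # SCT 0x3 = Path Related Status values
        0x00: "INTERNAL PATH ERROR",
        0x01: "ASYMMETRIC ACCESS PERSISTENT LOSS",
        0x02: "ASYMMETRIC ACCESS INACCESSIBLE",
        0x03: "ASYMMETRIC ACCESS TRANSITION",
    }
    name = path_related.get(sc, "") if sct == 0x3 else ""
    return ("(%02x/%02x) %s" % (sct, sc, name)).strip()
# A word with SCT=0x3, SC=0x02 renders as "(03/02) ASYMMETRIC ACCESS INACCESSIBLE",
# matching the completions logged above.
print(decode_cpl_status((0x3 << 9) | (0x02 << 1)))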
00:19:06.217 [2024-07-15 22:13:50.089995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.090021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.090055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.090095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.090135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.217 [2024-07-15 22:13:50.090162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:06.217 [2024-07-15 22:13:50.090205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.217 [2024-07-15 22:13:50.090232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.091406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.091463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.091512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.091541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.091577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.091603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.091637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.091663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.091697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.091725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.091763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.091790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.091825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.091851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.091907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.091965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.092007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.092033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.092068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.092113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.092151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.092180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.092235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.218 [2024-07-15 22:13:50.092263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.092299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.218 [2024-07-15 22:13:50.092326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.218 [2024-07-15 22:13:50.093151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.218 [2024-07-15 22:13:50.093228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.093290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.093351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.093407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.218 [2024-07-15 22:13:50.093462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.218 [2024-07-15 22:13:50.093533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.093589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.218 [2024-07-15 22:13:50.093654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.093711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.093770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.093831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:06.218 [2024-07-15 22:13:50.093908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.093959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.218 [2024-07-15 22:13:50.093987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.094023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.218 [2024-07-15 22:13:50.094049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.094103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.094133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.094175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.218 [2024-07-15 22:13:50.094204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.094838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.218 [2024-07-15 22:13:50.094892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.094939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.094984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.095023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.095051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.095103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.218 [2024-07-15 22:13:50.095133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:06.218 [2024-07-15 22:13:50.095171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.095198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.095232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.095258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.095292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.095318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.095353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.095390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.095425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.095451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.095487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.095513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.095981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.096035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.096130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.096213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.096274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.096351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.096403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.096467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.096521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.096575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.096634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.096690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.096773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.096867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.096907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.096937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.097858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.097915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:19:06.219 [2024-07-15 22:13:50.097963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.097992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.098073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.098162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.098223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.098284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.098346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.098406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.098464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.098521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.098576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.098634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.098688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.098747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.098782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.219 [2024-07-15 22:13:50.098822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.103217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.103312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.103375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.103410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.103452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.103483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.103523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.103552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.103590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.103620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.103657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.103683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.103715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.219 [2024-07-15 22:13:50.103739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:06.219 [2024-07-15 22:13:50.103773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.103797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.103829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.103854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.103886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.220 [2024-07-15 22:13:50.103910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.103941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.220 [2024-07-15 22:13:50.103964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.103997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:06.220 [2024-07-15 22:13:50.104048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.104148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.104211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.104256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.220 [2024-07-15 22:13:50.104286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.104324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.220 [2024-07-15 22:13:50.104351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0
m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.104393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.220 [2024-07-15 22:13:50.104434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.104492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.220 [2024-07-15 22:13:50.104524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.104561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.220 [2024-07-15 22:13:50.104587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.104622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.220 [2024-07-15 22:13:50.104647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.104681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.220 [2024-07-15 22:13:50.104710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.105790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.220 [2024-07-15 22:13:50.105848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.105896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.105925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.105963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.105987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.106779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.106803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.107398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.107453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.107500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.107529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.107565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.107593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.107630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.107657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.107696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.107722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.107756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.107819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.107859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.107886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.107942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.107972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.108021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:06.220 [2024-07-15 22:13:50.108051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.108106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.108137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.108177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.108221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.108269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.108297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:06.220 [2024-07-15 22:13:50.108333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.220 [2024-07-15 22:13:50.108377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.108414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.221 [2024-07-15 22:13:50.108440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.108475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.221 [2024-07-15 22:13:50.108500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.108534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.221 [2024-07-15 22:13:50.108560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.108595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.221 [2024-07-15 22:13:50.108621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.108656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.221 [2024-07-15 22:13:50.108682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.108718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.221 [2024-07-15 22:13:50.108743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.108777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.221 [2024-07-15 22:13:50.108802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.108836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.221 [2024-07-15 22:13:50.108860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.108892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.221 [2024-07-15 22:13:50.108915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.108945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.221 [2024-07-15 22:13:50.108969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.109003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.221 [2024-07-15 22:13:50.109028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.109062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.221 [2024-07-15 22:13:50.109119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.109963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.221 [2024-07-15 22:13:50.110019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.110110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.221 [2024-07-15 22:13:50.110147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.110188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.221 [2024-07-15 22:13:50.110215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.110250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.221 [2024-07-15 22:13:50.110276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.110311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.221 [2024-07-15 22:13:50.110337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.110373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.221 [2024-07-15 22:13:50.110399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.110433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.221 [2024-07-15 22:13:50.110459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.110492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.221 [2024-07-15 22:13:50.110517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.110553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.221 [2024-07-15 22:13:50.110581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.221 [2024-07-15 22:13:50.110618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.221 [2024-07-15 22:13:50.110645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.221 Received shutdown signal, test time was about 36.291140 seconds 00:19:06.221 00:19:06.221 Latency(us) 00:19:06.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.221 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:06.221 Verification LBA range: start 0x0 length 0x4000 00:19:06.221 Nvme0n1 : 36.29 8156.60 31.86 0.00 0.00 15660.19 139.64 4087539.90 00:19:06.221 =================================================================================================================== 00:19:06.221 Total : 8156.60 31.86 0.00 0.00 15660.19 139.64 4087539.90 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:06.482 rmmod nvme_tcp 00:19:06.482 rmmod nvme_fabrics 00:19:06.482 rmmod nvme_keyring 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 89068 ']' 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 89068 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89068 ']' 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89068 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89068 00:19:06.482 killing process with pid 89068 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:06.482 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:06.483 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89068' 00:19:06.483 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89068 00:19:06.483 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89068 00:19:06.750 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:06.750 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:06.750 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:06.750 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.750 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:06.750 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.750 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.750 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.750 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:06.750 00:19:06.750 real 0m41.711s 00:19:06.750 user 2m17.752s 00:19:06.750 sys 0m10.171s 00:19:06.750 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:19:06.750 22:13:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:06.750 ************************************ 00:19:06.750 END TEST nvmf_host_multipath_status 00:19:06.750 ************************************ 00:19:06.750 22:13:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:06.750 22:13:53 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:06.750 22:13:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:06.750 22:13:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:06.750 22:13:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.750 ************************************ 00:19:06.750 START TEST nvmf_discovery_remove_ifc 00:19:06.750 ************************************ 00:19:06.750 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:07.007 * Looking for test storage... 00:19:07.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
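The nvmftestfini call that closed nvmf_host_multipath_status above boils down to a short, fixed teardown: unload the kernel NVMe/TCP modules, stop the nvmf_tgt process, and flush the test network. A minimal sketch of that sequence, assuming the helper semantics implied by the trace (the body of _remove_spdk_ns is not printed here, so the namespace cleanup line below is an approximation):

# sketch of nvmftestfini / nvmfcleanup as traced above
sync
modprobe -v -r nvme-tcp || true        # also drops nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics || true
if [ -n "$nvmfpid" ]; then
    kill "$nvmfpid" && wait "$nvmfpid"  # killprocess: stop the nvmf_tgt reactor (pid 89068 above)
fi
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # _remove_spdk_ns (assumed body)
ip -4 addr flush nvmf_init_if

The same teardown runs again at the end of every nvmf host test in this log.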
00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:07.008 22:13:53 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.008 22:13:53 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:07.008 Cannot find device "nvmf_tgt_br" 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:07.008 Cannot find device "nvmf_tgt_br2" 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:07.008 Cannot find device "nvmf_tgt_br" 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:07.008 Cannot find device "nvmf_tgt_br2" 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:07.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:07.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:07.008 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:07.267 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:07.267 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:07.267 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:07.267 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:07.267 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:07.267 22:13:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:07.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:19:07.267 00:19:07.267 --- 10.0.0.2 ping statistics --- 00:19:07.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.267 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:07.267 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:07.267 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:19:07.267 00:19:07.267 --- 10.0.0.3 ping statistics --- 00:19:07.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.267 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:07.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:07.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:07.267 00:19:07.267 --- 10.0.0.1 ping statistics --- 00:19:07.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.267 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90478 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90478 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90478 ']' 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.267 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:07.267 [2024-07-15 22:13:54.183264] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
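The nvmf_veth_init trace above is worth keeping in mind for the rest of this test: the target runs inside the nvmf_tgt_ns_spdk namespace and owns 10.0.0.2 and 10.0.0.3, while the host keeps 10.0.0.1 on nvmf_init_if, with all veth peers enslaved to one bridge. Condensed to the essential commands (a sketch of the helper; the full version also clears leftovers from earlier runs and pings each address, as shown in the trace):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # host-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that topology in place, nvmf_tgt is launched through ip netns exec nvmf_tgt_ns_spdk (the NVMF_APP line above), and everything the host does against 10.0.0.2 on ports 8009 and 4420 crosses the bridge.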
00:19:07.267 [2024-07-15 22:13:54.183350] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.525 [2024-07-15 22:13:54.317282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.525 [2024-07-15 22:13:54.376018] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.525 [2024-07-15 22:13:54.376073] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.525 [2024-07-15 22:13:54.376098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.525 [2024-07-15 22:13:54.376107] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.525 [2024-07-15 22:13:54.376114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.525 [2024-07-15 22:13:54.376154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.525 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:07.525 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:19:07.525 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:07.525 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:07.525 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:07.784 [2024-07-15 22:13:54.511250] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.784 [2024-07-15 22:13:54.519357] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:07.784 null0 00:19:07.784 [2024-07-15 22:13:54.551361] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90514 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90514 /tmp/host.sock 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90514 ']' 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.784 Waiting for process to start up and listen on UNIX 
domain socket /tmp/host.sock... 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.784 22:13:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:07.784 [2024-07-15 22:13:54.626759] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:19:07.784 [2024-07-15 22:13:54.626856] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90514 ] 00:19:08.043 [2024-07-15 22:13:54.766687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.043 [2024-07-15 22:13:54.826330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.977 22:13:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:09.912 [2024-07-15 22:13:56.694688] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:09.912 [2024-07-15 22:13:56.694739] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:09.912 [2024-07-15 22:13:56.694760] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:09.912 [2024-07-15 22:13:56.780841] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:09.912 
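At this point the discovery service on the host has found the nqn.2016-06.io.spdk:cnode0 subsystem. The host-side steps that led here are a second SPDK app driven over /tmp/host.sock plus a poll on its bdev list; rpc_cmd in the trace is effectively the scripts/rpc.py wrapper from autotest_common.sh, which the rpc stand-in below approximates:

/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
hostpid=$!                                  # the harness waits for /tmp/host.sock (waitforlisten)
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }   # stand-in for rpc_cmd
rpc bdev_nvme_set_options -e 1
rpc framework_start_init
rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

# wait_for_bdev nvme0n1: poll until the discovered namespace shows up as a bdev
get_bdev_list() { rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
while [[ "$(get_bdev_list)" != "nvme0n1" ]]; do sleep 1; done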
[2024-07-15 22:13:56.837764] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:09.912 [2024-07-15 22:13:56.837840] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:09.912 [2024-07-15 22:13:56.837870] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:09.912 [2024-07-15 22:13:56.837889] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:09.912 [2024-07-15 22:13:56.837916] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:09.912 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.912 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:09.912 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:09.912 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:09.912 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.912 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:09.912 [2024-07-15 22:13:56.843133] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19c9650 was disconnected and freed. delete nvme_qpair. 00:19:09.912 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:09.912 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:09.912 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:10.171 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.171 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:10.171 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:19:10.171 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:10.171 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:10.171 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:10.171 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.171 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:10.171 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.171 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:10.172 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:10.172 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:10.172 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.172 22:13:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:10.172 22:13:56 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:11.107 22:13:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:11.107 22:13:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:11.107 22:13:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:11.107 22:13:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:11.107 22:13:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.107 22:13:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:11.107 22:13:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:11.107 22:13:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.107 22:13:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:11.107 22:13:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:12.482 22:13:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:12.482 22:13:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.482 22:13:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.482 22:13:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:12.482 22:13:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:12.482 22:13:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:12.482 22:13:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:12.482 22:13:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.482 22:13:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:12.482 22:13:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:13.415 22:14:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:13.415 22:14:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:13.415 22:14:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:13.415 22:14:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:13.415 22:14:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:13.415 22:14:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.415 22:14:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:13.415 22:14:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.415 22:14:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:13.415 22:14:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:14.379 22:14:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:14.379 22:14:01 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:14.379 22:14:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:14.379 22:14:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:14.379 22:14:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.379 22:14:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:14.379 22:14:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:14.379 22:14:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.379 22:14:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:14.379 22:14:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:15.313 22:14:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:15.313 22:14:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:15.313 22:14:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.313 22:14:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:15.313 22:14:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:15.313 22:14:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:15.313 22:14:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:15.313 22:14:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.572 [2024-07-15 22:14:02.265795] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:15.572 [2024-07-15 22:14:02.265886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.572 [2024-07-15 22:14:02.265903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.572 [2024-07-15 22:14:02.265917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.572 [2024-07-15 22:14:02.265927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.572 [2024-07-15 22:14:02.265937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.572 [2024-07-15 22:14:02.265947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.572 [2024-07-15 22:14:02.265958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.572 [2024-07-15 22:14:02.265967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.572 [2024-07-15 22:14:02.265977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:15.572 [2024-07-15 22:14:02.265986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:15.572 [2024-07-15 22:14:02.265998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1992900 is same with the state(5) to be set 00:19:15.572 [2024-07-15 22:14:02.275780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1992900 (9): Bad file descriptor 00:19:15.572 [2024-07-15 22:14:02.285829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:15.572 22:14:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:15.572 22:14:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:16.504 22:14:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:16.504 22:14:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:16.504 22:14:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:16.504 22:14:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:16.504 22:14:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:16.504 22:14:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.504 22:14:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:16.504 [2024-07-15 22:14:03.300170] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:16.504 [2024-07-15 22:14:03.300247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1992900 with addr=10.0.0.2, port=4420 00:19:16.504 [2024-07-15 22:14:03.300268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1992900 is same with the state(5) to be set 00:19:16.504 [2024-07-15 22:14:03.300310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1992900 (9): Bad file descriptor 00:19:16.504 [2024-07-15 22:14:03.301014] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:16.504 [2024-07-15 22:14:03.301074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:16.504 [2024-07-15 22:14:03.301120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:16.504 [2024-07-15 22:14:03.301140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:16.504 [2024-07-15 22:14:03.301175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:16.504 [2024-07-15 22:14:03.301194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:16.504 22:14:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.504 22:14:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:16.504 22:14:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:17.438 [2024-07-15 22:14:04.301258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:17.438 [2024-07-15 22:14:04.301339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:17.438 [2024-07-15 22:14:04.301355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:17.438 [2024-07-15 22:14:04.301371] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:19:17.438 [2024-07-15 22:14:04.301403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:17.438 [2024-07-15 22:14:04.301443] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:17.438 [2024-07-15 22:14:04.301514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.438 [2024-07-15 22:14:04.301533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.438 [2024-07-15 22:14:04.301553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.438 [2024-07-15 22:14:04.301567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.438 [2024-07-15 22:14:04.301583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.438 [2024-07-15 22:14:04.301597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.438 [2024-07-15 22:14:04.301612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.438 [2024-07-15 22:14:04.301626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.438 [2024-07-15 22:14:04.301641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.438 [2024-07-15 22:14:04.301655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.438 [2024-07-15 22:14:04.301669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:19:17.438 [2024-07-15 22:14:04.301720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19353e0 (9): Bad file descriptor 00:19:17.438 [2024-07-15 22:14:04.302711] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:17.439 [2024-07-15 22:14:04.302742] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:17.439 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:17.439 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:17.439 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:17.439 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.439 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:17.439 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:17.439 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:17.439 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:17.697 22:14:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:18.664 22:14:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:18.664 22:14:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:18.664 22:14:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:18.664 22:14:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:18.664 22:14:05 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:19:18.664 22:14:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.664 22:14:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:18.664 22:14:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.664 22:14:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:18.664 22:14:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:19.606 [2024-07-15 22:14:06.306416] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:19.606 [2024-07-15 22:14:06.306454] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:19.606 [2024-07-15 22:14:06.306475] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:19.606 [2024-07-15 22:14:06.392545] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:19.606 [2024-07-15 22:14:06.448759] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:19.606 [2024-07-15 22:14:06.448835] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:19.606 [2024-07-15 22:14:06.448861] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:19.606 [2024-07-15 22:14:06.448880] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:19.606 [2024-07-15 22:14:06.448891] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:19.606 [2024-07-15 22:14:06.454996] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19ae300 was disconnected and freed. delete nvme_qpair. 
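The point of the test is the sequence traced between the two attach phases: drop the target's data interface, wait for the controller and its bdev to disappear, restore the interface, and wait for discovery to re-attach it under a new name. Reduced to the @75 to @86 steps of discovery_remove_ifc.sh (a sketch reusing the get_bdev_list poll shown earlier):

ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
while [[ "$(get_bdev_list)" != "" ]]; do sleep 1; done         # nvme0n1 must go away

ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
while [[ "$(get_bdev_list)" != "nvme1n1" ]]; do sleep 1; done  # discovery re-attaches as nvme1

The errno 110 and reconnect failures in between are the expected path: with --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1 the host gives up on nvme0 quickly, frees its qpair and bdev, and the discovery poller re-creates the subsystem as nvme1 once 10.0.0.2:8009 answers again.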
00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90514 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90514 ']' 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90514 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90514 00:19:19.864 killing process with pid 90514 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90514' 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90514 00:19:19.864 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90514 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:20.122 rmmod nvme_tcp 00:19:20.122 rmmod nvme_fabrics 00:19:20.122 rmmod nvme_keyring 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:19:20.122 22:14:06 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90478 ']' 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90478 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90478 ']' 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90478 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90478 00:19:20.122 killing process with pid 90478 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90478' 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90478 00:19:20.122 22:14:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90478 00:19:20.381 22:14:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:20.381 22:14:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:20.381 22:14:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:20.381 22:14:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:20.381 22:14:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:20.381 22:14:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.381 22:14:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.381 22:14:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.381 22:14:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:20.381 00:19:20.381 real 0m13.485s 00:19:20.381 user 0m24.737s 00:19:20.381 sys 0m1.458s 00:19:20.381 22:14:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:20.381 22:14:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:20.381 ************************************ 00:19:20.381 END TEST nvmf_discovery_remove_ifc 00:19:20.381 ************************************ 00:19:20.381 22:14:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:20.381 22:14:07 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:20.381 22:14:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:20.381 22:14:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.381 22:14:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:20.382 ************************************ 00:19:20.382 START TEST nvmf_identify_kernel_target 00:19:20.382 ************************************ 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:20.382 * Looking for test storage... 00:19:20.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:20.382 Cannot find device "nvmf_tgt_br" 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:20.382 Cannot find device "nvmf_tgt_br2" 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:19:20.382 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:20.641 Cannot find device "nvmf_tgt_br" 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:20.641 Cannot find device "nvmf_tgt_br2" 00:19:20.641 22:14:07 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:20.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:20.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:20.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:19:20.641 00:19:20.641 --- 10.0.0.2 ping statistics --- 00:19:20.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.641 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:20.641 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:20.641 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:19:20.641 00:19:20.641 --- 10.0.0.3 ping statistics --- 00:19:20.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.641 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:20.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:20.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:20.641 00:19:20.641 --- 10.0.0.1 ping statistics --- 00:19:20.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.641 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:20.641 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:20.900 22:14:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:21.158 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:21.158 Waiting for block devices as requested 00:19:21.158 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:21.416 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:21.416 No valid GPT data, bailing 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:21.416 No valid GPT data, bailing 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:21.416 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:21.675 No valid GPT data, bailing 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:21.675 No valid GPT data, bailing 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
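For orientation, the configure_kernel_target sequence that nvmf/common.sh is driving here boils down to the sketch below. Bash xtrace does not print redirection targets, so the attribute file names are the standard kernel nvmet configfs names and are an assumption; the NQN, backing device, and address values are the ones visible in the surrounding entries.

    # Hedged reconstruction of the kernel NVMe-oF/TCP target setup, not a verbatim copy of the script
    modprobe nvmet                                              # only nvmet is loaded explicitly; nvmet_tcp ends up loaded too (the teardown later runs 'modprobe -r nvmet_tcp nvmet')
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo 1            > "$subsys/attr_allow_any_host"           # accept connections from any host NQN
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"      # back namespace 1 with the unused local NVMe disk found above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"            # listen on the initiator-side address of the veth pair
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"                # publish the subsystem on port 1

Once the symlink is in place, the 'nvme discover ... -a 10.0.0.1 -t tcp -s 4420' entry that follows returns two records, the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn, matching the Discovery Log printed below.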
00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -a 10.0.0.1 -t tcp -s 4420 00:19:21.675 00:19:21.675 Discovery Log Number of Records 2, Generation counter 2 00:19:21.675 =====Discovery Log Entry 0====== 00:19:21.675 trtype: tcp 00:19:21.675 adrfam: ipv4 00:19:21.675 subtype: current discovery subsystem 00:19:21.675 treq: not specified, sq flow control disable supported 00:19:21.675 portid: 1 00:19:21.675 trsvcid: 4420 00:19:21.675 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:21.675 traddr: 10.0.0.1 00:19:21.675 eflags: none 00:19:21.675 sectype: none 00:19:21.675 =====Discovery Log Entry 1====== 00:19:21.675 trtype: tcp 00:19:21.675 adrfam: ipv4 00:19:21.675 subtype: nvme subsystem 00:19:21.675 treq: not specified, sq flow control disable supported 00:19:21.675 portid: 1 00:19:21.675 trsvcid: 4420 00:19:21.675 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:21.675 traddr: 10.0.0.1 00:19:21.675 eflags: none 00:19:21.675 sectype: none 00:19:21.675 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:21.675 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:21.934 ===================================================== 00:19:21.934 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:21.934 ===================================================== 00:19:21.934 Controller Capabilities/Features 00:19:21.934 ================================ 00:19:21.934 Vendor ID: 0000 00:19:21.934 Subsystem Vendor ID: 0000 00:19:21.934 Serial Number: 99165431632c37487a5a 00:19:21.934 Model Number: Linux 00:19:21.934 Firmware Version: 6.7.0-68 00:19:21.934 Recommended Arb Burst: 0 00:19:21.934 IEEE OUI Identifier: 00 00 00 00:19:21.934 Multi-path I/O 00:19:21.934 May have multiple subsystem ports: No 00:19:21.934 May have multiple controllers: No 00:19:21.934 Associated with SR-IOV VF: No 00:19:21.934 Max Data Transfer Size: Unlimited 00:19:21.934 Max Number of Namespaces: 0 
00:19:21.934 Max Number of I/O Queues: 1024 00:19:21.934 NVMe Specification Version (VS): 1.3 00:19:21.934 NVMe Specification Version (Identify): 1.3 00:19:21.934 Maximum Queue Entries: 1024 00:19:21.934 Contiguous Queues Required: No 00:19:21.934 Arbitration Mechanisms Supported 00:19:21.934 Weighted Round Robin: Not Supported 00:19:21.934 Vendor Specific: Not Supported 00:19:21.934 Reset Timeout: 7500 ms 00:19:21.934 Doorbell Stride: 4 bytes 00:19:21.934 NVM Subsystem Reset: Not Supported 00:19:21.934 Command Sets Supported 00:19:21.934 NVM Command Set: Supported 00:19:21.934 Boot Partition: Not Supported 00:19:21.934 Memory Page Size Minimum: 4096 bytes 00:19:21.934 Memory Page Size Maximum: 4096 bytes 00:19:21.934 Persistent Memory Region: Not Supported 00:19:21.934 Optional Asynchronous Events Supported 00:19:21.934 Namespace Attribute Notices: Not Supported 00:19:21.934 Firmware Activation Notices: Not Supported 00:19:21.934 ANA Change Notices: Not Supported 00:19:21.934 PLE Aggregate Log Change Notices: Not Supported 00:19:21.934 LBA Status Info Alert Notices: Not Supported 00:19:21.934 EGE Aggregate Log Change Notices: Not Supported 00:19:21.934 Normal NVM Subsystem Shutdown event: Not Supported 00:19:21.934 Zone Descriptor Change Notices: Not Supported 00:19:21.934 Discovery Log Change Notices: Supported 00:19:21.934 Controller Attributes 00:19:21.934 128-bit Host Identifier: Not Supported 00:19:21.934 Non-Operational Permissive Mode: Not Supported 00:19:21.934 NVM Sets: Not Supported 00:19:21.934 Read Recovery Levels: Not Supported 00:19:21.934 Endurance Groups: Not Supported 00:19:21.934 Predictable Latency Mode: Not Supported 00:19:21.934 Traffic Based Keep ALive: Not Supported 00:19:21.934 Namespace Granularity: Not Supported 00:19:21.934 SQ Associations: Not Supported 00:19:21.935 UUID List: Not Supported 00:19:21.935 Multi-Domain Subsystem: Not Supported 00:19:21.935 Fixed Capacity Management: Not Supported 00:19:21.935 Variable Capacity Management: Not Supported 00:19:21.935 Delete Endurance Group: Not Supported 00:19:21.935 Delete NVM Set: Not Supported 00:19:21.935 Extended LBA Formats Supported: Not Supported 00:19:21.935 Flexible Data Placement Supported: Not Supported 00:19:21.935 00:19:21.935 Controller Memory Buffer Support 00:19:21.935 ================================ 00:19:21.935 Supported: No 00:19:21.935 00:19:21.935 Persistent Memory Region Support 00:19:21.935 ================================ 00:19:21.935 Supported: No 00:19:21.935 00:19:21.935 Admin Command Set Attributes 00:19:21.935 ============================ 00:19:21.935 Security Send/Receive: Not Supported 00:19:21.935 Format NVM: Not Supported 00:19:21.935 Firmware Activate/Download: Not Supported 00:19:21.935 Namespace Management: Not Supported 00:19:21.935 Device Self-Test: Not Supported 00:19:21.935 Directives: Not Supported 00:19:21.935 NVMe-MI: Not Supported 00:19:21.935 Virtualization Management: Not Supported 00:19:21.935 Doorbell Buffer Config: Not Supported 00:19:21.935 Get LBA Status Capability: Not Supported 00:19:21.935 Command & Feature Lockdown Capability: Not Supported 00:19:21.935 Abort Command Limit: 1 00:19:21.935 Async Event Request Limit: 1 00:19:21.935 Number of Firmware Slots: N/A 00:19:21.935 Firmware Slot 1 Read-Only: N/A 00:19:21.935 Firmware Activation Without Reset: N/A 00:19:21.935 Multiple Update Detection Support: N/A 00:19:21.935 Firmware Update Granularity: No Information Provided 00:19:21.935 Per-Namespace SMART Log: No 00:19:21.935 Asymmetric Namespace Access Log Page: 
Not Supported 00:19:21.935 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:21.935 Command Effects Log Page: Not Supported 00:19:21.935 Get Log Page Extended Data: Supported 00:19:21.935 Telemetry Log Pages: Not Supported 00:19:21.935 Persistent Event Log Pages: Not Supported 00:19:21.935 Supported Log Pages Log Page: May Support 00:19:21.935 Commands Supported & Effects Log Page: Not Supported 00:19:21.935 Feature Identifiers & Effects Log Page:May Support 00:19:21.935 NVMe-MI Commands & Effects Log Page: May Support 00:19:21.935 Data Area 4 for Telemetry Log: Not Supported 00:19:21.935 Error Log Page Entries Supported: 1 00:19:21.935 Keep Alive: Not Supported 00:19:21.935 00:19:21.935 NVM Command Set Attributes 00:19:21.935 ========================== 00:19:21.935 Submission Queue Entry Size 00:19:21.935 Max: 1 00:19:21.935 Min: 1 00:19:21.935 Completion Queue Entry Size 00:19:21.935 Max: 1 00:19:21.935 Min: 1 00:19:21.935 Number of Namespaces: 0 00:19:21.935 Compare Command: Not Supported 00:19:21.935 Write Uncorrectable Command: Not Supported 00:19:21.935 Dataset Management Command: Not Supported 00:19:21.935 Write Zeroes Command: Not Supported 00:19:21.935 Set Features Save Field: Not Supported 00:19:21.935 Reservations: Not Supported 00:19:21.935 Timestamp: Not Supported 00:19:21.935 Copy: Not Supported 00:19:21.935 Volatile Write Cache: Not Present 00:19:21.935 Atomic Write Unit (Normal): 1 00:19:21.935 Atomic Write Unit (PFail): 1 00:19:21.935 Atomic Compare & Write Unit: 1 00:19:21.935 Fused Compare & Write: Not Supported 00:19:21.935 Scatter-Gather List 00:19:21.935 SGL Command Set: Supported 00:19:21.935 SGL Keyed: Not Supported 00:19:21.935 SGL Bit Bucket Descriptor: Not Supported 00:19:21.935 SGL Metadata Pointer: Not Supported 00:19:21.935 Oversized SGL: Not Supported 00:19:21.935 SGL Metadata Address: Not Supported 00:19:21.935 SGL Offset: Supported 00:19:21.935 Transport SGL Data Block: Not Supported 00:19:21.935 Replay Protected Memory Block: Not Supported 00:19:21.935 00:19:21.935 Firmware Slot Information 00:19:21.935 ========================= 00:19:21.935 Active slot: 0 00:19:21.935 00:19:21.935 00:19:21.935 Error Log 00:19:21.935 ========= 00:19:21.935 00:19:21.935 Active Namespaces 00:19:21.935 ================= 00:19:21.935 Discovery Log Page 00:19:21.935 ================== 00:19:21.935 Generation Counter: 2 00:19:21.935 Number of Records: 2 00:19:21.935 Record Format: 0 00:19:21.935 00:19:21.935 Discovery Log Entry 0 00:19:21.935 ---------------------- 00:19:21.935 Transport Type: 3 (TCP) 00:19:21.935 Address Family: 1 (IPv4) 00:19:21.935 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:21.935 Entry Flags: 00:19:21.935 Duplicate Returned Information: 0 00:19:21.935 Explicit Persistent Connection Support for Discovery: 0 00:19:21.935 Transport Requirements: 00:19:21.935 Secure Channel: Not Specified 00:19:21.935 Port ID: 1 (0x0001) 00:19:21.935 Controller ID: 65535 (0xffff) 00:19:21.935 Admin Max SQ Size: 32 00:19:21.935 Transport Service Identifier: 4420 00:19:21.935 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:21.935 Transport Address: 10.0.0.1 00:19:21.935 Discovery Log Entry 1 00:19:21.935 ---------------------- 00:19:21.935 Transport Type: 3 (TCP) 00:19:21.935 Address Family: 1 (IPv4) 00:19:21.935 Subsystem Type: 2 (NVM Subsystem) 00:19:21.935 Entry Flags: 00:19:21.935 Duplicate Returned Information: 0 00:19:21.935 Explicit Persistent Connection Support for Discovery: 0 00:19:21.935 Transport Requirements: 00:19:21.935 
Secure Channel: Not Specified 00:19:21.935 Port ID: 1 (0x0001) 00:19:21.935 Controller ID: 65535 (0xffff) 00:19:21.935 Admin Max SQ Size: 32 00:19:21.935 Transport Service Identifier: 4420 00:19:21.935 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:21.935 Transport Address: 10.0.0.1 00:19:21.935 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:21.935 get_feature(0x01) failed 00:19:21.935 get_feature(0x02) failed 00:19:21.935 get_feature(0x04) failed 00:19:21.935 ===================================================== 00:19:21.935 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:21.935 ===================================================== 00:19:21.935 Controller Capabilities/Features 00:19:21.935 ================================ 00:19:21.935 Vendor ID: 0000 00:19:21.935 Subsystem Vendor ID: 0000 00:19:21.935 Serial Number: 895f6484c3ab0229ea01 00:19:21.935 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:21.935 Firmware Version: 6.7.0-68 00:19:21.935 Recommended Arb Burst: 6 00:19:21.935 IEEE OUI Identifier: 00 00 00 00:19:21.935 Multi-path I/O 00:19:21.935 May have multiple subsystem ports: Yes 00:19:21.935 May have multiple controllers: Yes 00:19:21.935 Associated with SR-IOV VF: No 00:19:21.936 Max Data Transfer Size: Unlimited 00:19:21.936 Max Number of Namespaces: 1024 00:19:21.936 Max Number of I/O Queues: 128 00:19:21.936 NVMe Specification Version (VS): 1.3 00:19:21.936 NVMe Specification Version (Identify): 1.3 00:19:21.936 Maximum Queue Entries: 1024 00:19:21.936 Contiguous Queues Required: No 00:19:21.936 Arbitration Mechanisms Supported 00:19:21.936 Weighted Round Robin: Not Supported 00:19:21.936 Vendor Specific: Not Supported 00:19:21.936 Reset Timeout: 7500 ms 00:19:21.936 Doorbell Stride: 4 bytes 00:19:21.936 NVM Subsystem Reset: Not Supported 00:19:21.936 Command Sets Supported 00:19:21.936 NVM Command Set: Supported 00:19:21.936 Boot Partition: Not Supported 00:19:21.936 Memory Page Size Minimum: 4096 bytes 00:19:21.936 Memory Page Size Maximum: 4096 bytes 00:19:21.936 Persistent Memory Region: Not Supported 00:19:21.936 Optional Asynchronous Events Supported 00:19:21.936 Namespace Attribute Notices: Supported 00:19:21.936 Firmware Activation Notices: Not Supported 00:19:21.936 ANA Change Notices: Supported 00:19:21.936 PLE Aggregate Log Change Notices: Not Supported 00:19:21.936 LBA Status Info Alert Notices: Not Supported 00:19:21.936 EGE Aggregate Log Change Notices: Not Supported 00:19:21.936 Normal NVM Subsystem Shutdown event: Not Supported 00:19:21.936 Zone Descriptor Change Notices: Not Supported 00:19:21.936 Discovery Log Change Notices: Not Supported 00:19:21.936 Controller Attributes 00:19:21.936 128-bit Host Identifier: Supported 00:19:21.936 Non-Operational Permissive Mode: Not Supported 00:19:21.936 NVM Sets: Not Supported 00:19:21.936 Read Recovery Levels: Not Supported 00:19:21.936 Endurance Groups: Not Supported 00:19:21.936 Predictable Latency Mode: Not Supported 00:19:21.936 Traffic Based Keep ALive: Supported 00:19:21.936 Namespace Granularity: Not Supported 00:19:21.936 SQ Associations: Not Supported 00:19:21.936 UUID List: Not Supported 00:19:21.936 Multi-Domain Subsystem: Not Supported 00:19:21.936 Fixed Capacity Management: Not Supported 00:19:21.936 Variable Capacity Management: Not Supported 00:19:21.936 
Delete Endurance Group: Not Supported 00:19:21.936 Delete NVM Set: Not Supported 00:19:21.936 Extended LBA Formats Supported: Not Supported 00:19:21.936 Flexible Data Placement Supported: Not Supported 00:19:21.936 00:19:21.936 Controller Memory Buffer Support 00:19:21.936 ================================ 00:19:21.936 Supported: No 00:19:21.936 00:19:21.936 Persistent Memory Region Support 00:19:21.936 ================================ 00:19:21.936 Supported: No 00:19:21.936 00:19:21.936 Admin Command Set Attributes 00:19:21.936 ============================ 00:19:21.936 Security Send/Receive: Not Supported 00:19:21.936 Format NVM: Not Supported 00:19:21.936 Firmware Activate/Download: Not Supported 00:19:21.936 Namespace Management: Not Supported 00:19:21.936 Device Self-Test: Not Supported 00:19:21.936 Directives: Not Supported 00:19:21.936 NVMe-MI: Not Supported 00:19:21.936 Virtualization Management: Not Supported 00:19:21.936 Doorbell Buffer Config: Not Supported 00:19:21.936 Get LBA Status Capability: Not Supported 00:19:21.936 Command & Feature Lockdown Capability: Not Supported 00:19:21.936 Abort Command Limit: 4 00:19:21.936 Async Event Request Limit: 4 00:19:21.936 Number of Firmware Slots: N/A 00:19:21.936 Firmware Slot 1 Read-Only: N/A 00:19:21.936 Firmware Activation Without Reset: N/A 00:19:21.936 Multiple Update Detection Support: N/A 00:19:21.936 Firmware Update Granularity: No Information Provided 00:19:21.936 Per-Namespace SMART Log: Yes 00:19:21.936 Asymmetric Namespace Access Log Page: Supported 00:19:21.936 ANA Transition Time : 10 sec 00:19:21.936 00:19:21.936 Asymmetric Namespace Access Capabilities 00:19:21.936 ANA Optimized State : Supported 00:19:21.936 ANA Non-Optimized State : Supported 00:19:21.936 ANA Inaccessible State : Supported 00:19:21.936 ANA Persistent Loss State : Supported 00:19:21.936 ANA Change State : Supported 00:19:21.936 ANAGRPID is not changed : No 00:19:21.936 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:21.936 00:19:21.936 ANA Group Identifier Maximum : 128 00:19:21.936 Number of ANA Group Identifiers : 128 00:19:21.936 Max Number of Allowed Namespaces : 1024 00:19:21.936 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:21.936 Command Effects Log Page: Supported 00:19:21.936 Get Log Page Extended Data: Supported 00:19:21.936 Telemetry Log Pages: Not Supported 00:19:21.936 Persistent Event Log Pages: Not Supported 00:19:21.936 Supported Log Pages Log Page: May Support 00:19:21.936 Commands Supported & Effects Log Page: Not Supported 00:19:21.936 Feature Identifiers & Effects Log Page:May Support 00:19:21.936 NVMe-MI Commands & Effects Log Page: May Support 00:19:21.936 Data Area 4 for Telemetry Log: Not Supported 00:19:21.936 Error Log Page Entries Supported: 128 00:19:21.936 Keep Alive: Supported 00:19:21.936 Keep Alive Granularity: 1000 ms 00:19:21.936 00:19:21.936 NVM Command Set Attributes 00:19:21.936 ========================== 00:19:21.936 Submission Queue Entry Size 00:19:21.936 Max: 64 00:19:21.936 Min: 64 00:19:21.936 Completion Queue Entry Size 00:19:21.936 Max: 16 00:19:21.936 Min: 16 00:19:21.936 Number of Namespaces: 1024 00:19:21.936 Compare Command: Not Supported 00:19:21.936 Write Uncorrectable Command: Not Supported 00:19:21.936 Dataset Management Command: Supported 00:19:21.936 Write Zeroes Command: Supported 00:19:21.936 Set Features Save Field: Not Supported 00:19:21.936 Reservations: Not Supported 00:19:21.936 Timestamp: Not Supported 00:19:21.936 Copy: Not Supported 00:19:21.936 Volatile Write Cache: Present 
00:19:21.936 Atomic Write Unit (Normal): 1 00:19:21.936 Atomic Write Unit (PFail): 1 00:19:21.936 Atomic Compare & Write Unit: 1 00:19:21.936 Fused Compare & Write: Not Supported 00:19:21.936 Scatter-Gather List 00:19:21.936 SGL Command Set: Supported 00:19:21.936 SGL Keyed: Not Supported 00:19:21.936 SGL Bit Bucket Descriptor: Not Supported 00:19:21.936 SGL Metadata Pointer: Not Supported 00:19:21.936 Oversized SGL: Not Supported 00:19:21.936 SGL Metadata Address: Not Supported 00:19:21.936 SGL Offset: Supported 00:19:21.936 Transport SGL Data Block: Not Supported 00:19:21.936 Replay Protected Memory Block: Not Supported 00:19:21.936 00:19:21.936 Firmware Slot Information 00:19:21.936 ========================= 00:19:21.936 Active slot: 0 00:19:21.936 00:19:21.937 Asymmetric Namespace Access 00:19:21.937 =========================== 00:19:21.937 Change Count : 0 00:19:21.937 Number of ANA Group Descriptors : 1 00:19:21.937 ANA Group Descriptor : 0 00:19:21.937 ANA Group ID : 1 00:19:21.937 Number of NSID Values : 1 00:19:21.937 Change Count : 0 00:19:21.937 ANA State : 1 00:19:21.937 Namespace Identifier : 1 00:19:21.937 00:19:21.937 Commands Supported and Effects 00:19:21.937 ============================== 00:19:21.937 Admin Commands 00:19:21.937 -------------- 00:19:21.937 Get Log Page (02h): Supported 00:19:21.937 Identify (06h): Supported 00:19:21.937 Abort (08h): Supported 00:19:21.937 Set Features (09h): Supported 00:19:21.937 Get Features (0Ah): Supported 00:19:21.937 Asynchronous Event Request (0Ch): Supported 00:19:21.937 Keep Alive (18h): Supported 00:19:21.937 I/O Commands 00:19:21.937 ------------ 00:19:21.937 Flush (00h): Supported 00:19:21.937 Write (01h): Supported LBA-Change 00:19:21.937 Read (02h): Supported 00:19:21.937 Write Zeroes (08h): Supported LBA-Change 00:19:21.937 Dataset Management (09h): Supported 00:19:21.937 00:19:21.937 Error Log 00:19:21.937 ========= 00:19:21.937 Entry: 0 00:19:21.937 Error Count: 0x3 00:19:21.937 Submission Queue Id: 0x0 00:19:21.937 Command Id: 0x5 00:19:21.937 Phase Bit: 0 00:19:21.937 Status Code: 0x2 00:19:21.937 Status Code Type: 0x0 00:19:21.937 Do Not Retry: 1 00:19:21.937 Error Location: 0x28 00:19:21.937 LBA: 0x0 00:19:21.937 Namespace: 0x0 00:19:21.937 Vendor Log Page: 0x0 00:19:21.937 ----------- 00:19:21.937 Entry: 1 00:19:21.937 Error Count: 0x2 00:19:21.937 Submission Queue Id: 0x0 00:19:21.937 Command Id: 0x5 00:19:21.937 Phase Bit: 0 00:19:21.937 Status Code: 0x2 00:19:21.937 Status Code Type: 0x0 00:19:21.937 Do Not Retry: 1 00:19:21.937 Error Location: 0x28 00:19:21.937 LBA: 0x0 00:19:21.937 Namespace: 0x0 00:19:21.937 Vendor Log Page: 0x0 00:19:21.937 ----------- 00:19:21.937 Entry: 2 00:19:21.937 Error Count: 0x1 00:19:21.937 Submission Queue Id: 0x0 00:19:21.937 Command Id: 0x4 00:19:21.937 Phase Bit: 0 00:19:21.937 Status Code: 0x2 00:19:21.937 Status Code Type: 0x0 00:19:21.937 Do Not Retry: 1 00:19:21.937 Error Location: 0x28 00:19:21.937 LBA: 0x0 00:19:21.937 Namespace: 0x0 00:19:21.937 Vendor Log Page: 0x0 00:19:21.937 00:19:21.937 Number of Queues 00:19:21.937 ================ 00:19:21.937 Number of I/O Submission Queues: 128 00:19:21.937 Number of I/O Completion Queues: 128 00:19:21.937 00:19:21.937 ZNS Specific Controller Data 00:19:21.937 ============================ 00:19:21.937 Zone Append Size Limit: 0 00:19:21.937 00:19:21.937 00:19:21.937 Active Namespaces 00:19:21.937 ================= 00:19:21.937 get_feature(0x05) failed 00:19:21.937 Namespace ID:1 00:19:21.937 Command Set Identifier: NVM (00h) 
00:19:21.937 Deallocate: Supported 00:19:21.937 Deallocated/Unwritten Error: Not Supported 00:19:21.937 Deallocated Read Value: Unknown 00:19:21.937 Deallocate in Write Zeroes: Not Supported 00:19:21.937 Deallocated Guard Field: 0xFFFF 00:19:21.937 Flush: Supported 00:19:21.937 Reservation: Not Supported 00:19:21.937 Namespace Sharing Capabilities: Multiple Controllers 00:19:21.937 Size (in LBAs): 1310720 (5GiB) 00:19:21.937 Capacity (in LBAs): 1310720 (5GiB) 00:19:21.937 Utilization (in LBAs): 1310720 (5GiB) 00:19:21.937 UUID: e469963d-656e-4bd7-956a-d4e16a207697 00:19:21.937 Thin Provisioning: Not Supported 00:19:21.937 Per-NS Atomic Units: Yes 00:19:21.937 Atomic Boundary Size (Normal): 0 00:19:21.937 Atomic Boundary Size (PFail): 0 00:19:21.937 Atomic Boundary Offset: 0 00:19:21.937 NGUID/EUI64 Never Reused: No 00:19:21.937 ANA group ID: 1 00:19:21.937 Namespace Write Protected: No 00:19:21.937 Number of LBA Formats: 1 00:19:21.937 Current LBA Format: LBA Format #00 00:19:21.937 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:21.937 00:19:21.937 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:21.937 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:21.937 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:22.196 rmmod nvme_tcp 00:19:22.196 rmmod nvme_fabrics 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:22.196 
22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:22.196 22:14:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:22.196 22:14:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:22.196 22:14:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:22.196 22:14:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:22.196 22:14:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:22.196 22:14:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:22.760 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:23.017 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:23.017 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:23.017 00:19:23.017 real 0m2.650s 00:19:23.017 user 0m0.905s 00:19:23.017 sys 0m1.295s 00:19:23.017 22:14:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:23.017 22:14:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.017 ************************************ 00:19:23.017 END TEST nvmf_identify_kernel_target 00:19:23.017 ************************************ 00:19:23.017 22:14:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:23.017 22:14:09 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:23.017 22:14:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:23.017 22:14:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:23.017 22:14:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:23.017 ************************************ 00:19:23.017 START TEST nvmf_auth_host 00:19:23.017 ************************************ 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:23.017 * Looking for test storage... 
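The clean_kernel_target teardown traced just above undoes that setup in reverse order. Again a hedged reconstruction, filling in the redirection target that xtrace omits and reusing the variables from the setup sketch earlier:

    echo 0 > "$subsys/namespaces/1/enable"                           # assumed target of the 'echo 0' above: disable the namespace first
    rm -f  "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"   # unpublish the subsystem from the port
    rmdir  "$subsys/namespaces/1"
    rmdir  "$nvmet/ports/1"
    rmdir  "$subsys"
    modprobe -r nvmet_tcp nvmet                                      # unload the TCP transport and the target core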
00:19:23.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.017 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:23.275 22:14:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:23.275 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:23.276 Cannot find device "nvmf_tgt_br" 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:23.276 Cannot find device "nvmf_tgt_br2" 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:23.276 Cannot find device "nvmf_tgt_br" 
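The "Cannot find device" / "Cannot open network namespace" messages above are just nvmf_veth_init clearing leftovers from a previous run; the creation commands that follow rebuild the test topology. A condensed sketch of that topology, using the same interface names and addresses the trace shows (link-up steps omitted):

# Condensed sketch of what nvmf_veth_init builds (names/IPs as in the trace).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side pairs
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the netns
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # bridge the host-side ends together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings at the end of the setup verify that 10.0.0.2/10.0.0.3 (inside the namespace) and 10.0.0.1 (the host) can reach each other across the bridge before any NVMe-oF traffic starts.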
00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:23.276 Cannot find device "nvmf_tgt_br2" 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:23.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:23.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:23.276 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:23.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:19:23.533 00:19:23.533 --- 10.0.0.2 ping statistics --- 00:19:23.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.533 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:23.533 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:23.533 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:19:23.533 00:19:23.533 --- 10.0.0.3 ping statistics --- 00:19:23.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.533 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:23.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:23.533 00:19:23.533 --- 10.0.0.1 ping statistics --- 00:19:23.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.533 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91408 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91408 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91408 ']' 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.533 22:14:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.533 22:14:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=97ac1fc416c2c1566009b68bf66a37a9 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OO8 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 97ac1fc416c2c1566009b68bf66a37a9 0 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 97ac1fc416c2c1566009b68bf66a37a9 0 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=97ac1fc416c2c1566009b68bf66a37a9 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:24.493 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OO8 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OO8 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.OO8 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ed8e2ff2ac5a0292d318b9989639f11bbf175cc74bdb4ea2b6aebad3e2062c4a 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Dja 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ed8e2ff2ac5a0292d318b9989639f11bbf175cc74bdb4ea2b6aebad3e2062c4a 3 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ed8e2ff2ac5a0292d318b9989639f11bbf175cc74bdb4ea2b6aebad3e2062c4a 3 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ed8e2ff2ac5a0292d318b9989639f11bbf175cc74bdb4ea2b6aebad3e2062c4a 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Dja 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Dja 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Dja 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9830bc9d93af1945e1bd032963cf16a69c29fb2a00221991 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eCH 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9830bc9d93af1945e1bd032963cf16a69c29fb2a00221991 0 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9830bc9d93af1945e1bd032963cf16a69c29fb2a00221991 0 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9830bc9d93af1945e1bd032963cf16a69c29fb2a00221991 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eCH 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eCH 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.eCH 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3c41648500b0eabce8cd05f3820372fbee1f1937bcbaf95e 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gQG 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3c41648500b0eabce8cd05f3820372fbee1f1937bcbaf95e 2 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3c41648500b0eabce8cd05f3820372fbee1f1937bcbaf95e 2 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3c41648500b0eabce8cd05f3820372fbee1f1937bcbaf95e 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gQG 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gQG 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.gQG 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cfe423c0938daa045cfe6b1855ff483b 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NFB 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cfe423c0938daa045cfe6b1855ff483b 
1 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cfe423c0938daa045cfe6b1855ff483b 1 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cfe423c0938daa045cfe6b1855ff483b 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:24.751 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NFB 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NFB 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.NFB 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0443e5a5ee307d8a9849b83bded6f802 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4fc 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0443e5a5ee307d8a9849b83bded6f802 1 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0443e5a5ee307d8a9849b83bded6f802 1 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0443e5a5ee307d8a9849b83bded6f802 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4fc 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4fc 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.4fc 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:25.009 22:14:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b2c0e987d5b67cdac984f8513a6b0b7816a719ba177aa84f 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:25.009 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JHc 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b2c0e987d5b67cdac984f8513a6b0b7816a719ba177aa84f 2 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b2c0e987d5b67cdac984f8513a6b0b7816a719ba177aa84f 2 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b2c0e987d5b67cdac984f8513a6b0b7816a719ba177aa84f 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JHc 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JHc 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.JHc 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=93af58911f3c6d92faf9a307c32890b4 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hvC 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 93af58911f3c6d92faf9a307c32890b4 0 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 93af58911f3c6d92faf9a307c32890b4 0 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=93af58911f3c6d92faf9a307c32890b4 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hvC 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hvC 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.hvC 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8bcd292eb87d475db76e5a6752a3462c272d191c32fa0bdd8aa6066f80127d99 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8Ut 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8bcd292eb87d475db76e5a6752a3462c272d191c32fa0bdd8aa6066f80127d99 3 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8bcd292eb87d475db76e5a6752a3462c272d191c32fa0bdd8aa6066f80127d99 3 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8bcd292eb87d475db76e5a6752a3462c272d191c32fa0bdd8aa6066f80127d99 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:25.010 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:25.268 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8Ut 00:19:25.268 22:14:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8Ut 00:19:25.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.268 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.8Ut 00:19:25.268 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:25.268 22:14:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91408 00:19:25.268 22:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91408 ']' 00:19:25.268 22:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.268 22:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.268 22:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
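The gen_dhchap_key calls above boil down to drawing random bytes, hex-encoding them, and wrapping the result in a DHHC-1 secret string before locking the key file down. A rough sketch of one iteration; the actual base64/checksum wrapping is done by the inline python helper in nvmf/common.sh and is not reproduced here:

# Rough sketch of gen_dhchap_key <digest> <len> as traced above.
# Digest ids used by the script: null=0, sha256=1, sha384=2, sha512=3.
digest=null; len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex characters of random data
file=$(mktemp -t "spdk.key-$digest.XXX")              # e.g. /tmp/spdk.key-null.OO8
# format_dhchap_key (python helper, not shown) writes "DHHC-1:<digest id>:<wrapped secret>:"
# into $file; the resulting strings are the DHHC-1:... values seen later in this log.
chmod 0600 "$file"                                    # key files must not be world-readable
echo "$file"                                          # path recorded as keys[i] / ckeys[i]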
00:19:25.268 22:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.268 22:14:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.OO8 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Dja ]] 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Dja 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.eCH 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.gQG ]] 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gQG 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.NFB 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.4fc ]] 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4fc 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.526 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.JHc 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.hvC ]] 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.hvC 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.8Ut 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
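On the SPDK side, the keyring_file_add_key loop above and the attach/detach cycles further below reduce to three RPC steps: register each generated key file, restrict the allowed digests/dhgroups, and attach with per-key DH-HMAC-CHAP options. A condensed sketch using scripts/rpc.py (rpc_cmd in the trace is a wrapper around it), with the same arguments the trace shows for keyid 1:

# 1) Make the generated key files available to the keyring.
scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.eCH
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gQG

# 2) Limit bdev_nvme to the digest/dhgroup combination under test.
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 3) Attach to the kernel soft target with in-band authentication enabled.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Detach between iterations so the next digest/dhgroup/keyid combination starts clean.
scripts/rpc.py bdev_nvme_detach_controller nvme0

The kernel target side is prepared by the configfs mkdir/echo/ln -s sequence that follows, which exposes /dev/nvme1n1 as namespace 1 of nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420 and sets the matching dhchap key for the allowed host.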
00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:25.527 22:14:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:25.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:25.784 Waiting for block devices as requested 00:19:25.784 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:26.042 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:26.608 No valid GPT data, bailing 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:26.608 No valid GPT data, bailing 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:26.608 No valid GPT data, bailing 00:19:26.608 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:26.867 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:26.867 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:26.867 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:26.867 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:26.867 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:26.867 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:26.867 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:26.868 No valid GPT data, bailing 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:19:26.868 22:14:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -a 10.0.0.1 -t tcp -s 4420 00:19:26.868 00:19:26.868 Discovery Log Number of Records 2, Generation counter 2 00:19:26.868 =====Discovery Log Entry 0====== 00:19:26.868 trtype: tcp 00:19:26.868 adrfam: ipv4 00:19:26.868 subtype: current discovery subsystem 00:19:26.868 treq: not specified, sq flow control disable supported 00:19:26.868 portid: 1 00:19:26.868 trsvcid: 4420 00:19:26.868 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:26.868 traddr: 10.0.0.1 00:19:26.868 eflags: none 00:19:26.868 sectype: none 00:19:26.868 =====Discovery Log Entry 1====== 00:19:26.868 trtype: tcp 00:19:26.868 adrfam: ipv4 00:19:26.868 subtype: nvme subsystem 00:19:26.868 treq: not specified, sq flow control disable supported 00:19:26.868 portid: 1 00:19:26.868 trsvcid: 4420 00:19:26.868 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:26.868 traddr: 10.0.0.1 00:19:26.868 eflags: none 00:19:26.868 sectype: none 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.868 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.126 nvme0n1 00:19:27.127 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.127 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.127 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.127 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.127 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.127 22:14:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.127 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.127 22:14:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.127 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.127 22:14:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.127 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.385 nvme0n1 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.385 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.386 nvme0n1 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.386 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.643 22:14:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.643 nvme0n1 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:27.643 22:14:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.643 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.644 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.900 nvme0n1 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:27.900 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.158 nvme0n1 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:28.158 22:14:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.415 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.673 nvme0n1 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.673 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.931 nvme0n1 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.931 22:14:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.931 nvme0n1 00:19:28.931 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.188 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.188 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.188 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.189 22:14:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.189 nvme0n1 00:19:29.189 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.189 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.189 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.189 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.189 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.189 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.447 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:29.447 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.447 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.447 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
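The ip_candidates entries traced just above belong to the get_main_ns_ip helper in nvmf/common.sh, which picks which environment variable supplies the address for the attach that follows. A minimal sketch of that selection logic, reconstructed from the trace rather than copied from the source (the transport variable name TEST_TRANSPORT is an assumption; the NVMF_* names and the resolved 10.0.0.1 are the values visible in this run):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
        ["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) dial the initiator IP
    )

    # Bail out if the transport is unset or has no candidate variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # Indirect expansion: ip holds the *name* of the variable to read.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"   # 10.0.0.1 in this run
}
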
00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.448 nvme0n1 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:29.448 22:14:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
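At host/auth.sh@100-104 the trace has just moved on to the ffdhe4096 group: the test nests one loop per digest, per DH group, and per key index, programming the target with nvmet_auth_set_key and then exercising the connection with connect_authenticate. A sketch of that driver loop as implied by the trace (the array contents shown are the combinations exercised in this run; the keys array itself holds the generated DHHC-1 secrets and is not reproduced here):

digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do            # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
        for keyid in "${!keys[@]}"; do       # host/auth.sh@102, key indexes 0..4
            # push key/ckey for this combination into the kernel nvmet target
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # restrict the initiator to the same digest/dhgroup and verify the connect
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
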
00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.380 nvme0n1 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.380 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.638 nvme0n1 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.638 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.896 nvme0n1 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.896 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:31.154 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.155 22:14:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.155 nvme0n1 00:19:31.155 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.155 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
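The entries above repeat one pattern per (digest, dhgroup, keyid) combination exercised by host/auth.sh: install the key on the target via nvmet_auth_set_key, restrict the initiator to a single digest/dhgroup pair with bdev_nvme_set_options, attach the controller with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), confirm the connection through bdev_nvme_get_controllers, and detach again. The sketch below is a simplified stand-in for that initiator-side sequence, not the actual connect_authenticate implementation from auth.sh; it assumes SPDK's scripts/rpc.py is reachable from the working directory and that keyN/ckeyN have already been registered as DH-HMAC-CHAP key names, exactly as they appear in the log.

    # Illustrative sketch only: replays the RPC sequence visible in the log entries above.
    # Assumes scripts/rpc.py (SPDK) is available and key${keyid}/ckey names are already loaded.
    connect_one() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey_name=$4   # ckey_name may be empty
        local ctrlr_key=()
        # Mirror the ${ckeys[keyid]:+...} idiom from the log: only pass the controller-key
        # flag when a controller key is configured (keyid 4 has none in this run).
        [[ -n $ckey_name ]] && ctrlr_key=(--dhchap-ctrlr-key "$ckey_name")

        scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ctrlr_key[@]}"
        [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        scripts/rpc.py bdev_nvme_detach_controller nvme0
    }

For the iteration logged just above this would be invoked as, e.g., connect_one sha256 ffdhe4096 2 ckey2.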
00:19:31.155 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.155 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.155 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.155 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:31.413 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:31.414 22:14:18 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.414 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.672 nvme0n1 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:31.672 22:14:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.570 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.829 nvme0n1 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.829 22:14:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.393 nvme0n1 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.393 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.394 
22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.394 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.652 nvme0n1 00:19:34.652 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.652 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.652 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.652 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.652 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.652 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.911 22:14:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.168 nvme0n1 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:35.168 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.169 22:14:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.169 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.734 nvme0n1 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.734 22:14:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.301 nvme0n1 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:36.301 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 
-- # local ip 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.558 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.123 nvme0n1 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.123 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.124 22:14:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.689 nvme0n1 00:19:37.689 22:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.689 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.689 22:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.689 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.689 22:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.689 22:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.947 
22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.947 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
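The repeated get_main_ns_ip block in these entries resolves which address is handed to bdev_nvme_attach_controller: an associative array maps the transport type to the name of an environment variable (NVMF_INITIATOR_IP for tcp, NVMF_FIRST_TARGET_IP for rdma), and the value is then read through indirect expansion, which is why the trace first prints ip=NVMF_INITIATOR_IP and only afterwards echoes 10.0.0.1. A condensed sketch of that lookup is shown below; it assumes the usual NVMF_* variables are exported by the test environment and is not the verbatim nvmf/common.sh function.

    # Condensed form of the ip_candidates lookup traced in the log (assumes the
    # NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP variables are set by the test env).
    get_main_ns_ip() {
        local transport=$1
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        local var=${ip_candidates[$transport]}   # e.g. NVMF_INITIATOR_IP for tcp
        echo "${!var}"                           # indirect expansion -> 10.0.0.1 in this run
    }

The indirect expansion is what makes the same helper work for both transports without duplicating the address-selection logic.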
00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.948 22:14:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.513 nvme0n1 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:38.513 
22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.513 22:14:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.086 nvme0n1 00:19:39.087 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.087 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.087 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.087 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.087 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.384 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.384 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.384 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.384 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.384 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.384 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.384 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:39.384 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.384 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.384 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.385 nvme0n1 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
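[Editor's note] The ckey=() assignment traced at host/auth.sh@58 above is what makes the controller key optional: the ${ckeys[keyid]:+...} expansion only produces the extra --dhchap-ctrlr-key argument when a controller key exists for that keyid (keyid 4 has an empty ckey, so its attach is unidirectional). A small sketch with hypothetical key values:
# Sketch of the optional controller-key argument (values are placeholders)
declare -a ckeys=([1]="DHHC-1:02:placeholder==:" [4]="")
keyid=1
ckey_args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey_args[@]}"   # prints: --dhchap-ctrlr-key ckey1
keyid=4
ckey_args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey_args[@]}"   # prints nothing: no controller key is passed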
00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.385 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.644 nvme0n1 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.644 nvme0n1 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.644 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.903 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.903 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.903 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.903 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.903 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.903 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.903 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.903 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:39.903 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.903 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.904 nvme0n1 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.904 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.162 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:40.163 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:40.163 22:14:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:40.163 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:40.163 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.163 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.163 nvme0n1 00:19:40.163 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.163 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.163 22:14:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.163 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.163 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.163 22:14:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
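[Editor's note] Each attach above is followed by the same check-and-teardown, traced at host/auth.sh@64-65: list the controllers, confirm the expected name came up (i.e. authentication succeeded), then detach before the next digest/dhgroup/keyid combination. A condensed sketch using only the RPCs seen in this log:
# Verify the authenticated controller exists, then clean up (sketch)
ctrlr_name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $ctrlr_name == "nvme0" ]]                 # attach + DH-HMAC-CHAP auth succeeded
rpc_cmd bdev_nvme_detach_controller nvme0    # drop it before the next iteration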
00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.163 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.421 nvme0n1 00:19:40.421 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.421 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.421 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.421 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.421 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.422 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.680 nvme0n1 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:40.680 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.681 nvme0n1 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.681 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.939 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.940 nvme0n1 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.940 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.199 22:14:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.199 nvme0n1 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.199 22:14:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.199 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.200 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.458 nvme0n1 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.458 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.716 nvme0n1 00:19:41.716 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.716 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.716 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.716 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.716 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.716 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.716 22:14:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.716 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.716 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.716 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.716 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.716 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.716 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.717 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.977 nvme0n1 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:41.977 22:14:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.977 22:14:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.235 nvme0n1 00:19:42.235 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.235 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.235 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.235 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.235 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.235 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.235 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.235 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.235 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.235 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.494 nvme0n1 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.494 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.753 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.035 nvme0n1 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.035 22:14:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.601 nvme0n1 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.601 22:14:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:43.601 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.602 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.169 nvme0n1 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.169 22:14:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.427 nvme0n1 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
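In the sha384/ffdhe6144 keyid=4 iteration traced above, the ckeys slot is empty (host/auth.sh@51 evaluates [[ -z '' ]]), so the attach call that follows sends --dhchap-key key4 with no --dhchap-ctrlr-key, i.e. without a controller key for bidirectional authentication. A minimal standalone rendering of the conditional-argument idiom from host/auth.sh@58, with illustrative array contents rather than the harness's own setup:

    # Only emit --dhchap-ctrlr-key when a controller key exists for this keyid.
    declare -a ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=ckey3 [4]="")  # slot 4 left empty, as in the trace
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "extra attach args: ${ckey[*]:-(none)}"   # prints "(none)" for keyid 4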
00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.428 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.994 nvme0n1 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
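The ffdhe8192 pass that starts here runs the same host-side cycle as the ffdhe4096 and ffdhe6144 passes above: restrict the allowed digest and DH group, attach the controller with DH-HMAC-CHAP keys, check that the expected controller name shows up, then detach. A rough standalone sketch of one such iteration, calling SPDK's scripts/rpc.py directly in place of the harness's rpc_cmd wrapper and assuming the key0/ckey0 key material has already been registered on both sides (that setup is not shown in this part of the trace):

    #!/usr/bin/env bash
    # One connect/verify/detach iteration, modeled on the commands in the trace above.
    rpc=./scripts/rpc.py        # stand-in for the test harness's rpc_cmd helper
    digest=sha384
    dhgroup=ffdhe8192

    # Limit the host to the digest/dhgroup pair under test.
    "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with DH-HMAC-CHAP; --dhchap-ctrlr-key supplies the controller's key
    # so that authentication runs in both directions.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # The controller should now be reported under the requested name; then clean up.
    name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == nvme0 ]] && echo "authenticated controller: $name"
    "$rpc" bdev_nvme_detach_controller nvme0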
00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.994 22:14:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.928 nvme0n1 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:45.928 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.929 22:14:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.862 nvme0n1 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.862 22:14:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.428 nvme0n1 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.428 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.362 nvme0n1 00:19:48.362 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.362 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:48.362 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.362 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.362 22:14:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.362 22:14:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:48.362 22:14:35 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.362 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.928 nvme0n1 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:48.928 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.929 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.187 nvme0n1 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.187 22:14:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.187 22:14:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.187 nvme0n1 00:19:49.187 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.187 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.187 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.187 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.187 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.187 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.187 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.187 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.187 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.187 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.446 nvme0n1 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.446 22:14:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:49.446 22:14:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.446 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.704 nvme0n1 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.704 nvme0n1 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.704 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.962 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.962 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.962 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.962 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.962 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.962 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.962 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.963 nvme0n1 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.963 
22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.963 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.220 22:14:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.220 22:14:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.220 nvme0n1 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
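[editor's note] The nvmet_auth_set_key calls traced here program the kernel nvmet target for the next authentication attempt. A minimal sketch of what those echoes amount to, assuming the standard nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under the host NQN directory; the configfs path is an assumption, while the echoed values ('hmac(sha512)', ffdhe3072, the DHHC-1 secrets) come straight from the trace:

    # Sketch only: the configfs location is assumed, not taken from host/auth.sh.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host_dir}/dhchap_hash"     # e.g. hmac(sha512)
        echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"  # e.g. ffdhe3072
        echo "${key}"          > "${host_dir}/dhchap_key"      # DHHC-1:..: host secret
        # Controller (bidirectional) key only when one exists for this keyid
        [[ -n ${ckey} ]] && echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
    }
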
00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:50.220 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.221 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.221 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.479 nvme0n1 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.479 22:14:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
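[editor's note] The ip_candidates bookkeeping in the trace above is the host-side address lookup (get_main_ns_ip). A sketch of the logic as it reads from the xtrace output; TEST_TRANSPORT is assumed to be the variable carrying the literal "tcp" seen in the log:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Map the transport to the *name* of the variable holding its address
        [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}

        # Indirect expansion resolves that name to the address (10.0.0.1 here)
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }
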
00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.479 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.738 nvme0n1 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:50.738 
22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.738 nvme0n1 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.738 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.005 nvme0n1 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.005 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:51.269 22:14:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.269 22:14:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.269 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.527 nvme0n1 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
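Each (digest, dhgroup, keyid) iteration traced above follows the same four-step round: reconfigure the initiator's allowed DH-HMAC-CHAP parameters, attach with that key pair, confirm the controller actually came up, then detach before moving to the next key. Condensed from the trace for the keyid=2 pass (same rpc_cmd wrapper, NQNs and address as printed above; it presumes the target configured earlier in this run is still listening on 10.0.0.1:4420):
# one authentication round as exercised above (sha512 / ffdhe4096 / keyid=2)
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# mirrors the host/auth.sh@64 check: the freshly attached controller must be listed as nvme0
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0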
00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.527 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.528 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.785 nvme0n1 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:51.785 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.786 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.044 nvme0n1 00:19:52.044 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.044 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.044 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.044 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.044 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.044 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.044 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.044 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.045 22:14:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.304 nvme0n1 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
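One detail visible in the keyid=4 passes above (where ckey= expands empty and the [[ -z '' ]] branch is taken): ckey is built with bash's ${parameter:+word} expansion, so when no controller key is configured for that keyid the --dhchap-ctrlr-key argument disappears entirely rather than being passed with an empty value. A minimal, self-contained illustration of that expansion, with hypothetical array contents:
# ${ckeys[keyid]:+word} drops the word entirely when ckeys[keyid] is unset or empty,
# which is why the keyid=4 attach above carries no --dhchap-ctrlr-key at all
ckeys=( [0]="DHHC-1:03:example=" [4]="" )   # hypothetical contents for the demo
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
done
# keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
# keyid=4 -> 0 extra arg(s):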
00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.304 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.871 nvme0n1 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
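The get_main_ns_ip steps traced at nvmf/common.sh@741-755 resolve which address the initiator should dial: an associative array maps each transport to the name of the environment variable holding its IP, the name for the active transport is picked, and that variable is dereferenced (NVMF_INITIATOR_IP=10.0.0.1 in this virt run). The sketch below is a rough reconstruction inferred from the xtrace output, not copied from nvmf/common.sh; the real helper reads the transport from a global, while here it is passed as an argument so the example stays self-contained:
get_main_ns_ip() {
    local transport=$1 ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # resolve the variable *name* for this transport, then dereference it
    [[ -z $transport || -z ${ip_candidates[$transport]:-} ]] && return 1
    ip=${ip_candidates[$transport]}
    [[ -z ${!ip:-} ]] && return 1          # NVMF_INITIATOR_IP=10.0.0.1 in this run
    echo "${!ip}"
}
NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip tcp    # prints 10.0.0.1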
00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.871 22:14:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.129 nvme0n1 00:19:53.129 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.129 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.129 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.129 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.129 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.129 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.387 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.645 nvme0n1 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.645 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.212 nvme0n1 00:19:54.212 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.212 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.212 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.212 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.212 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.212 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.212 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.212 22:14:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.212 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.212 22:14:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.212 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.471 nvme0n1 00:19:54.471 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.471 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.471 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.471 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.471 22:14:41 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.471 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTdhYzFmYzQxNmMyYzE1NjYwMDliNjhiZjY2YTM3YTmR0gjM: 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: ]] 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQ4ZTJmZjJhYzVhMDI5MmQzMThiOTk4OTYzOWYxMWJiZjE3NWNjNzRiZGI0ZWEyYjZhZWJhZDNlMjA2MmM0Ycq3s8I=: 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.729 22:14:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.295 nvme0n1 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:55.295 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.296 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.912 nvme0n1 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.912 22:14:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2ZlNDIzYzA5MzhkYWEwNDVjZmU2YjE4NTVmZjQ4M2K9Hq2O: 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: ]] 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ0M2U1YTVlZTMwN2Q4YTk4NDliODNiZGVkNmY4MDIKz0ka: 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:55.912 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.189 22:14:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.755 nvme0n1 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:56.755 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjJjMGU5ODdkNWI2N2NkYWM5ODRmODUxM2E2YjBiNzgxNmE3MTliYTE3N2FhODRmtmdaFA==: 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: ]] 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTNhZjU4OTExZjNjNmQ5MmZhZjlhMzA3YzMyODkwYjQ2mq4H: 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:56.756 22:14:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.756 22:14:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.689 nvme0n1 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGJjZDI5MmViODdkNDc1ZGI3NmU1YTY3NTJhMzQ2MmMyNzJkMTkxYzMyZmEwYmRkOGFhNjA2NmY4MDEyN2Q5OYOCFFU=: 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:57.689 22:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:57.690 22:14:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:57.690 22:14:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:57.690 22:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:57.690 22:14:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.255 nvme0n1 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTgzMGJjOWQ5M2FmMTk0NWUxYmQwMzI5NjNjZjE2YTY5YzI5ZmIyYTAwMjIxOTkx0hOlag==: 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: ]] 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2M0MTY0ODUwMGIwZWFiY2U4Y2QwNWYzODIwMzcyZmJlZTFmMTkzN2JjYmFmOTVlFRpEWA==: 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.255 
22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.255 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.256 2024/07/15 22:14:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:58.256 request: 00:19:58.256 { 00:19:58.256 "method": "bdev_nvme_attach_controller", 00:19:58.256 "params": { 00:19:58.256 "name": "nvme0", 00:19:58.256 "trtype": "tcp", 00:19:58.256 "traddr": "10.0.0.1", 00:19:58.256 "adrfam": "ipv4", 00:19:58.256 "trsvcid": "4420", 00:19:58.256 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:58.256 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:58.256 "prchk_reftag": false, 00:19:58.256 "prchk_guard": false, 00:19:58.256 "hdgst": false, 00:19:58.256 "ddgst": false 00:19:58.256 } 00:19:58.256 } 00:19:58.256 Got JSON-RPC error response 00:19:58.256 GoRPCClient: error on JSON-RPC call 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.256 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.514 2024/07/15 22:14:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:58.514 request: 00:19:58.514 { 00:19:58.514 "method": "bdev_nvme_attach_controller", 00:19:58.514 "params": { 00:19:58.514 "name": 
"nvme0", 00:19:58.514 "trtype": "tcp", 00:19:58.514 "traddr": "10.0.0.1", 00:19:58.514 "adrfam": "ipv4", 00:19:58.514 "trsvcid": "4420", 00:19:58.514 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:58.514 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:58.514 "prchk_reftag": false, 00:19:58.514 "prchk_guard": false, 00:19:58.514 "hdgst": false, 00:19:58.514 "ddgst": false, 00:19:58.514 "dhchap_key": "key2" 00:19:58.514 } 00:19:58.514 } 00:19:58.514 Got JSON-RPC error response 00:19:58.514 GoRPCClient: error on JSON-RPC call 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.514 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.515 2024/07/15 22:14:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:58.515 request: 00:19:58.515 { 00:19:58.515 "method": "bdev_nvme_attach_controller", 00:19:58.515 "params": { 00:19:58.515 "name": "nvme0", 00:19:58.515 "trtype": "tcp", 00:19:58.515 "traddr": "10.0.0.1", 00:19:58.515 "adrfam": "ipv4", 00:19:58.515 "trsvcid": "4420", 00:19:58.515 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:58.515 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:58.515 "prchk_reftag": false, 00:19:58.515 "prchk_guard": false, 00:19:58.515 "hdgst": false, 00:19:58.515 "ddgst": false, 00:19:58.515 "dhchap_key": "key1", 00:19:58.515 "dhchap_ctrlr_key": "ckey2" 00:19:58.515 } 00:19:58.515 } 00:19:58.515 Got JSON-RPC error response 00:19:58.515 GoRPCClient: error on JSON-RPC call 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:58.515 rmmod nvme_tcp 00:19:58.515 rmmod nvme_fabrics 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91408 ']' 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91408 00:19:58.515 22:14:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91408 ']' 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91408 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91408 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91408' 00:19:58.515 killing process with pid 91408 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91408 00:19:58.515 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91408 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:58.774 22:14:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:59.340 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:59.599 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:59.599 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:59.599 22:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.OO8 /tmp/spdk.key-null.eCH /tmp/spdk.key-sha256.NFB /tmp/spdk.key-sha384.JHc /tmp/spdk.key-sha512.8Ut /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:59.599 22:14:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:59.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:59.855 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:59.855 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:59.855 00:19:59.855 real 0m36.891s 00:19:59.855 user 0m32.490s 00:19:59.855 sys 0m3.522s 00:19:59.855 22:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:59.855 22:14:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.855 ************************************ 00:19:59.855 END TEST nvmf_auth_host 00:19:59.855 ************************************ 00:20:00.113 22:14:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:00.113 22:14:46 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:20:00.113 22:14:46 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:00.113 22:14:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:00.113 22:14:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.113 22:14:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:00.113 ************************************ 00:20:00.113 START TEST nvmf_digest 00:20:00.113 ************************************ 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:00.113 * Looking for test storage... 
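The cleanup above runs in two layers before the next test starts: nvmftestfini syncs, unloads nvme-tcp and nvme-fabrics, kills the SPDK process (pid 91408) and flushes nvmf_init_if, while clean_kernel_target dismantles the kernel nvmet configfs tree it created and the generated key files are removed. A condensed sketch of that configfs teardown, using the subsystem, port, and host names from this run; the bare 'echo 0' in the trace does not show its redirect target, which is assumed here to be the namespace enable attribute:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  host=nqn.2024-02.io.spdk:host0
  rm "$subsys/allowed_hosts/$host"                      # drop the host from the allow list
  rmdir "/sys/kernel/config/nvmet/hosts/$host"
  echo 0 > "$subsys/namespaces/1/enable"                # assumed target of the bare 'echo 0'
  rm -f "$port/subsystems/nqn.2024-02.io.spdk:cnode0"   # detach the subsystem from the port
  rmdir "$subsys/namespaces/1"
  rmdir "$port"
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet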
00:20:00.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:00.113 Cannot find device "nvmf_tgt_br" 00:20:00.113 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:20:00.114 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.114 Cannot find device "nvmf_tgt_br2" 00:20:00.114 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:20:00.114 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:00.114 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:00.114 Cannot find device "nvmf_tgt_br" 00:20:00.114 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:20:00.114 22:14:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:00.114 Cannot find device "nvmf_tgt_br2" 00:20:00.114 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:20:00.114 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:00.114 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:00.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:20:00.371 00:20:00.371 --- 10.0.0.2 ping statistics --- 00:20:00.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.371 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:00.371 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:00.371 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:20:00.371 00:20:00.371 --- 10.0.0.3 ping statistics --- 00:20:00.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.371 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:00.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:00.371 00:20:00.371 --- 10.0.0.1 ping statistics --- 00:20:00.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.371 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:00.371 ************************************ 00:20:00.371 START TEST nvmf_digest_clean 00:20:00.371 ************************************ 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=93005 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 93005 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93005 ']' 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.371 22:14:47 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.371 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.629 22:14:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:00.629 [2024-07-15 22:14:47.393808] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:20:00.629 [2024-07-15 22:14:47.393962] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.629 [2024-07-15 22:14:47.538038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.887 [2024-07-15 22:14:47.597063] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.887 [2024-07-15 22:14:47.597131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.887 [2024-07-15 22:14:47.597144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.887 [2024-07-15 22:14:47.597152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.887 [2024-07-15 22:14:47.597160] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
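nvmf_veth_init above builds the virtual test network the digest tests run over: the target side lives in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3 on two veth pairs, the initiator keeps 10.0.0.1 on nvmf_init_if, both sides hang off the nvmf_br bridge, and the three pings confirm reachability before nvmf_tgt is started inside the namespace with --wait-for-rpc. A condensed, hand-runnable version of that setup, with interface names and addresses taken from the trace (run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1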
00:20:00.887 [2024-07-15 22:14:47.597188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.459 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.459 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:01.459 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.459 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.459 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:01.719 null0 00:20:01.719 [2024-07-15 22:14:48.481544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.719 [2024-07-15 22:14:48.505647] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93055 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93055 /var/tmp/bperf.sock 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93055 ']' 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:01.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
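For reference, the nvmf_digest_clean flow traced above reduces to two launch commands, both copied from the xtrace lines: the target runs inside the nvmf_tgt_ns_spdk network namespace with --wait-for-rpc, and bdevperf runs as the initiator against its own private RPC socket. A minimal sketch, assuming the same repo path, core masks and sockets as this run:

$ ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
$ /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc

With --wait-for-rpc, each app defers subsystem initialization until a framework_start_init RPC arrives on its socket (/var/tmp/spdk.sock for the target, /var/tmp/bperf.sock for bdevperf), which is why the harness first polls with waitforlisten and, for the bdevperf side, issues framework_start_init explicitly in the next step.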
00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:01.719 22:14:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:01.719 [2024-07-15 22:14:48.569195] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:20:01.719 [2024-07-15 22:14:48.569298] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93055 ] 00:20:01.977 [2024-07-15 22:14:48.708127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.977 [2024-07-15 22:14:48.799797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.912 22:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.912 22:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:02.912 22:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:02.912 22:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:02.912 22:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:03.170 22:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:03.170 22:14:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:03.428 nvme0n1 00:20:03.428 22:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:03.428 22:14:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:03.687 Running I/O for 2 seconds... 
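Each run_bperf pass then drives bdevperf purely over its RPC socket, exactly as the rpc.py calls above show: finish framework init, attach an NVMe-oF controller with data digest enabled (--ddgst), and start the timed workload via bdevperf.py. A condensed sketch of that sequence, using the same address, port and NQN as this run:

$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$ /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The attached controller surfaces as nvme0n1, and perform_tests runs the 2-second randread workload whose results follow.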
00:20:05.583 00:20:05.583 Latency(us) 00:20:05.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.583 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:05.583 nvme0n1 : 2.01 18088.48 70.66 0.00 0.00 7065.87 3723.64 13166.78 00:20:05.583 =================================================================================================================== 00:20:05.583 Total : 18088.48 70.66 0.00 0.00 7065.87 3723.64 13166.78 00:20:05.583 0 00:20:05.583 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:05.583 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:05.583 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:05.583 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:05.583 | select(.opcode=="crc32c") 00:20:05.583 | "\(.module_name) \(.executed)"' 00:20:05.583 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93055 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93055 ']' 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93055 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93055 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93055' 00:20:05.840 killing process with pid 93055 00:20:05.840 Received shutdown signal, test time was about 2.000000 seconds 00:20:05.840 00:20:05.840 Latency(us) 00:20:05.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.840 =================================================================================================================== 00:20:05.840 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93055 00:20:05.840 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93055 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93145 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93145 /var/tmp/bperf.sock 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93145 ']' 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.097 22:14:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:06.097 Zero copy mechanism will not be used. 00:20:06.097 [2024-07-15 22:14:52.953953] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:20:06.097 [2024-07-15 22:14:52.954037] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93145 ] 00:20:06.355 [2024-07-15 22:14:53.086601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.355 [2024-07-15 22:14:53.146591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.300 22:14:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.300 22:14:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:07.300 22:14:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:07.300 22:14:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:07.300 22:14:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:07.558 22:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:07.558 22:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:07.817 nvme0n1 00:20:07.817 22:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:07.817 22:14:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:08.075 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:08.075 Zero copy mechanism will not be used. 00:20:08.075 Running I/O for 2 seconds... 
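The pass/fail criterion applied after each of these 2-second runs is the accel statistics query traced after the first run above: the script reads the initiator's accel framework stats over the bperf socket and filters them for the crc32c opcode. Reassembled from the xtrace fragments (the pipe itself is implicit in the trace), the check is roughly:

$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
software 141234        # hypothetical output: module name and executed-operation count

The test then asserts that the executed count is greater than zero and, with DSA disabled (scan_dsa=false), that the reporting module is "software".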
00:20:09.977 00:20:09.978 Latency(us) 00:20:09.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.978 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:09.978 nvme0n1 : 2.00 6967.77 870.97 0.00 0.00 2292.04 636.74 8579.26 00:20:09.978 =================================================================================================================== 00:20:09.978 Total : 6967.77 870.97 0.00 0.00 2292.04 636.74 8579.26 00:20:09.978 0 00:20:09.978 22:14:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:09.978 22:14:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:09.978 22:14:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:09.978 22:14:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:09.978 22:14:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:09.978 | select(.opcode=="crc32c") 00:20:09.978 | "\(.module_name) \(.executed)"' 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93145 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93145 ']' 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93145 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93145 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:10.236 killing process with pid 93145 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93145' 00:20:10.236 Received shutdown signal, test time was about 2.000000 seconds 00:20:10.236 00:20:10.236 Latency(us) 00:20:10.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.236 =================================================================================================================== 00:20:10.236 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93145 00:20:10.236 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93145 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93236 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93236 /var/tmp/bperf.sock 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93236 ']' 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:10.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.495 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:10.495 [2024-07-15 22:14:57.368546] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:20:10.495 [2024-07-15 22:14:57.368647] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93236 ] 00:20:10.753 [2024-07-15 22:14:57.502805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.753 [2024-07-15 22:14:57.561687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.753 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.753 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:10.753 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:10.753 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:10.753 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:11.319 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:11.320 22:14:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:11.578 nvme0n1 00:20:11.578 22:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:11.578 22:14:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:11.578 Running I/O for 2 seconds... 
00:20:13.533 00:20:13.533 Latency(us) 00:20:13.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.533 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:13.533 nvme0n1 : 2.00 19627.91 76.67 0.00 0.00 6514.13 2576.76 38606.66 00:20:13.533 =================================================================================================================== 00:20:13.533 Total : 19627.91 76.67 0.00 0.00 6514.13 2576.76 38606.66 00:20:13.533 0 00:20:13.533 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:13.533 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:13.533 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:13.533 | select(.opcode=="crc32c") 00:20:13.533 | "\(.module_name) \(.executed)"' 00:20:13.533 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:13.534 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93236 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93236 ']' 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93236 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93236 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:14.104 killing process with pid 93236 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93236' 00:20:14.104 Received shutdown signal, test time was about 2.000000 seconds 00:20:14.104 00:20:14.104 Latency(us) 00:20:14.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.104 =================================================================================================================== 00:20:14.104 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93236 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93236 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93307 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93307 /var/tmp/bperf.sock 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93307 ']' 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.104 22:15:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:14.104 [2024-07-15 22:15:01.028581] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:20:14.104 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:14.104 Zero copy mechanism will not be used. 
00:20:14.104 [2024-07-15 22:15:01.028711] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93307 ] 00:20:14.362 [2024-07-15 22:15:01.183798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.362 [2024-07-15 22:15:01.274957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.620 22:15:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.620 22:15:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:14.620 22:15:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:14.620 22:15:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:14.620 22:15:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:14.878 22:15:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:14.878 22:15:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:15.135 nvme0n1 00:20:15.135 22:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:15.135 22:15:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:15.393 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:15.393 Zero copy mechanism will not be used. 00:20:15.393 Running I/O for 2 seconds... 
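For orientation, the four nvmf_digest_clean passes in this test differ only in the workload arguments handed to bdevperf; everything else (attach with --ddgst, perform_tests, the crc32c stats check, killprocess) repeats unchanged. The four invocations, as they appear in the trace, use:

-w randread  -o 4096   -q 128    # 4 KiB random reads,   queue depth 128
-w randread  -o 131072 -q 16     # 128 KiB random reads,  queue depth 16
-w randwrite -o 4096   -q 128    # 4 KiB random writes,  queue depth 128
-w randwrite -o 131072 -q 16     # 128 KiB random writes, queue depth 16

The two 128 KiB cases also print the "greater than zero copy threshold (65536)" notice, since 131072-byte I/Os exceed the 65536-byte zero-copy cutoff the tool reports.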
00:20:17.310 00:20:17.310 Latency(us) 00:20:17.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.310 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:17.310 nvme0n1 : 2.00 5677.24 709.66 0.00 0.00 2811.28 1921.40 9472.93 00:20:17.310 =================================================================================================================== 00:20:17.310 Total : 5677.24 709.66 0.00 0.00 2811.28 1921.40 9472.93 00:20:17.310 0 00:20:17.310 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:17.310 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:17.310 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:17.310 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:17.310 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:17.310 | select(.opcode=="crc32c") 00:20:17.310 | "\(.module_name) \(.executed)"' 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93307 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93307 ']' 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93307 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93307 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:17.569 killing process with pid 93307 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93307' 00:20:17.569 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93307 00:20:17.570 Received shutdown signal, test time was about 2.000000 seconds 00:20:17.570 00:20:17.570 Latency(us) 00:20:17.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.570 =================================================================================================================== 00:20:17.570 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.570 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93307 00:20:17.829 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93005 00:20:17.829 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 93005 ']' 00:20:17.829 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93005 00:20:17.829 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:17.829 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:17.829 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93005 00:20:17.829 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:17.829 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:17.829 killing process with pid 93005 00:20:17.829 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93005' 00:20:17.829 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93005 00:20:17.829 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93005 00:20:18.087 00:20:18.087 real 0m17.527s 00:20:18.087 user 0m33.903s 00:20:18.087 sys 0m4.383s 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:18.087 ************************************ 00:20:18.087 END TEST nvmf_digest_clean 00:20:18.087 ************************************ 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:18.087 ************************************ 00:20:18.087 START TEST nvmf_digest_error 00:20:18.087 ************************************ 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93408 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93408 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93408 ']' 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.087 22:15:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:18.087 [2024-07-15 22:15:04.954075] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:20:18.087 [2024-07-15 22:15:04.954197] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.345 [2024-07-15 22:15:05.098252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.345 [2024-07-15 22:15:05.169891] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.345 [2024-07-15 22:15:05.169946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.345 [2024-07-15 22:15:05.169960] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.345 [2024-07-15 22:15:05.169971] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.345 [2024-07-15 22:15:05.169980] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.345 [2024-07-15 22:15:05.170009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.280 22:15:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.280 22:15:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:19.280 22:15:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.280 22:15:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.280 22:15:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:19.280 [2024-07-15 22:15:06.014626] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.280 22:15:06 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:19.280 null0 00:20:19.280 [2024-07-15 22:15:06.089534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.280 [2024-07-15 22:15:06.113634] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93452 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93452 /var/tmp/bperf.sock 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93452 ']' 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:19.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:19.280 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:19.280 [2024-07-15 22:15:06.182644] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
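The nvmf_digest_error test restarts the target and, while it is still paused at --wait-for-rpc, routes the crc32c operation to the accel "error" module so that digest failures can be injected later. The harness's rpc_cmd wraps rpc.py against the target's /var/tmp/spdk.sock; a direct equivalent of the traced calls would be roughly:

$ ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error

The notice "Operation crc32c will be assigned to module error" above confirms the reassignment; the null0 bdev and the 10.0.0.2:4420 TCP listener are then configured as in the clean test, and bdevperf is launched this time without --wait-for-rpc.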
00:20:19.280 [2024-07-15 22:15:06.182763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93452 ] 00:20:19.537 [2024-07-15 22:15:06.319982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.537 [2024-07-15 22:15:06.389556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.537 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.537 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:19.538 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:19.538 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:20.102 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:20.102 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.102 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:20.102 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.102 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:20.102 22:15:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:20.361 nvme0n1 00:20:20.361 22:15:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:20.361 22:15:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.361 22:15:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:20.361 22:15:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.361 22:15:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:20.361 22:15:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:20.361 Running I/O for 2 seconds... 
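On the initiator side the error test mirrors the clean flow (bdev_nvme_set_options, attach with --ddgst, perform_tests over /var/tmp/bperf.sock), while the corruption itself is armed on the target, whose crc32c opcode was handed to the error module above. Reassembled from the traced bperf_rpc and rpc_cmd calls, and hedged as a reconstruction rather than the exact harness code, the sequence is roughly:

# initiator (bperf.sock): enable NVMe error statistics, set a negative bdev retry count
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
# target (spdk.sock): injection explicitly disabled before the controller attaches
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
# initiator: attach with data digest enabled
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# target: now corrupt crc32c results, then run the timed workload
$ /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
$ /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The flood of "data digest error on tqpair ... COMMAND TRANSIENT TRANSPORT ERROR" notices that follows is the intended outcome: the target now produces corrupted data digests, the TCP initiator detects them, and each affected read completes with a transient transport error that the bdev layer can retry under the retry-count setting above instead of failing the job outright.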
00:20:20.361 [2024-07-15 22:15:07.220452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.361 [2024-07-15 22:15:07.220525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.361 [2024-07-15 22:15:07.220541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.361 [2024-07-15 22:15:07.232160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.361 [2024-07-15 22:15:07.232203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.361 [2024-07-15 22:15:07.232227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.361 [2024-07-15 22:15:07.248799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.361 [2024-07-15 22:15:07.248845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.361 [2024-07-15 22:15:07.248860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.361 [2024-07-15 22:15:07.260763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.361 [2024-07-15 22:15:07.260805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.361 [2024-07-15 22:15:07.260820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.361 [2024-07-15 22:15:07.276006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.361 [2024-07-15 22:15:07.276069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.361 [2024-07-15 22:15:07.276095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.361 [2024-07-15 22:15:07.289967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.361 [2024-07-15 22:15:07.290019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.361 [2024-07-15 22:15:07.290034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.361 [2024-07-15 22:15:07.304381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.361 [2024-07-15 22:15:07.304426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.361 [2024-07-15 22:15:07.304441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.320512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.320558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.320572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.333216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.333261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.333275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.348520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.348577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.348593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.363003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.363054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.363069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.378070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.378145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.378160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.390800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.390860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.390876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.404005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.404047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.404062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.419308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.419354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.419369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.433247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.433295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.433309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.446020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.446064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.446079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.460424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.460486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.460500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.474922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.474982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.474997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.488408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.488456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.488470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.504620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.504684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.504699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.519903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.519963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.519977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.533563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.533620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.533634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.549012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.549068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.549096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.620 [2024-07-15 22:15:07.563249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.620 [2024-07-15 22:15:07.563312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.620 [2024-07-15 22:15:07.563327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.879 [2024-07-15 22:15:07.575997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.879 [2024-07-15 22:15:07.576049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.879 [2024-07-15 22:15:07.576064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.879 [2024-07-15 22:15:07.593804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.879 [2024-07-15 22:15:07.593864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.879 [2024-07-15 22:15:07.593880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.879 [2024-07-15 22:15:07.606131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.879 [2024-07-15 22:15:07.606194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.879 
[2024-07-15 22:15:07.606208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.879 [2024-07-15 22:15:07.620944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.879 [2024-07-15 22:15:07.620995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.879 [2024-07-15 22:15:07.621009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.635914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.635966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.635988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.651658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.651710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.651725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.664173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.664218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.664232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.678847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.678900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.678915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.693208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.693261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.693275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.706913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.706962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18205 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.706977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.722204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.722258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.722272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.736553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.736605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.736619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.751073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.751149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.751164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.766178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.766237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.766252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.778213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.778264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.778279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.792859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.792912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.792926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.807563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.807616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:30 nsid:1 lba:16882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.807630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:20.880 [2024-07-15 22:15:07.820786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:20.880 [2024-07-15 22:15:07.820839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.880 [2024-07-15 22:15:07.820854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:07.836529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:07.836579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:07.836594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:07.852428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:07.852486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:07.852500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:07.864246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:07.864319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:07.864335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:07.877900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:07.877964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:07.877979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:07.893986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:07.894057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:07.894073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:07.907020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:07.907079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:07.907110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:07.922245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:07.922311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:07.922327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:07.937734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:07.937798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:07.937813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:07.950947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:07.951009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:07.951024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:07.963990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:07.964057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:07.964072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:07.980837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:07.980908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:07.980924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:07.995354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:07.995419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:07.995434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:08.010431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 
[2024-07-15 22:15:08.010498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:08.010513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:08.024373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:08.024434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:08.024450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:08.036116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:08.036181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:08.036195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:08.050318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:08.050382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:08.050397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:08.065284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:08.065351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:08.065366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.139 [2024-07-15 22:15:08.080046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.139 [2024-07-15 22:15:08.080129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.139 [2024-07-15 22:15:08.080145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.398 [2024-07-15 22:15:08.094005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.398 [2024-07-15 22:15:08.094070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.398 [2024-07-15 22:15:08.094098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.398 [2024-07-15 22:15:08.109309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xaf13e0) 00:20:21.398 [2024-07-15 22:15:08.109359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.109373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.123866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.123916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.123930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.135947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.135994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.136007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.149846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.149897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.149912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.166188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.166229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.166243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.179217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.179260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.179274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.195227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.195278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.195292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.210143] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.210204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.210218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.223971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.224009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.224023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.237707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.237745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.237759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.250396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.250441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.250455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.267849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.267901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.267915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.280808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.280847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.280861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.296039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.296097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.296113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:21.399 [2024-07-15 22:15:08.307990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.308028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.308042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.323295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.323334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.323348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.399 [2024-07-15 22:15:08.337838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.399 [2024-07-15 22:15:08.337876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.399 [2024-07-15 22:15:08.337890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.658 [2024-07-15 22:15:08.353703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.658 [2024-07-15 22:15:08.353748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.658 [2024-07-15 22:15:08.353762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.658 [2024-07-15 22:15:08.367918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.658 [2024-07-15 22:15:08.367981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.658 [2024-07-15 22:15:08.367995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.658 [2024-07-15 22:15:08.380043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.658 [2024-07-15 22:15:08.380092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.658 [2024-07-15 22:15:08.380108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.658 [2024-07-15 22:15:08.393423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.658 [2024-07-15 22:15:08.393463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.658 [2024-07-15 22:15:08.393477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.658 [2024-07-15 22:15:08.407618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.658 [2024-07-15 22:15:08.407656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.658 [2024-07-15 22:15:08.407669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.658 [2024-07-15 22:15:08.421489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.658 [2024-07-15 22:15:08.421551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.658 [2024-07-15 22:15:08.421566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.658 [2024-07-15 22:15:08.436438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.658 [2024-07-15 22:15:08.436502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.658 [2024-07-15 22:15:08.436517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.658 [2024-07-15 22:15:08.451014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.658 [2024-07-15 22:15:08.451053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.658 [2024-07-15 22:15:08.451068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.658 [2024-07-15 22:15:08.463799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.658 [2024-07-15 22:15:08.463838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.658 [2024-07-15 22:15:08.463852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.658 [2024-07-15 22:15:08.477807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.658 [2024-07-15 22:15:08.477850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.658 [2024-07-15 22:15:08.477863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.658 [2024-07-15 22:15:08.492102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.658 [2024-07-15 22:15:08.492141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.658 [2024-07-15 22:15:08.492155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.658 [2024-07-15 22:15:08.507540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.658 [2024-07-15 22:15:08.507581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.659 [2024-07-15 22:15:08.507594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.659 [2024-07-15 22:15:08.521220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.659 [2024-07-15 22:15:08.521258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.659 [2024-07-15 22:15:08.521271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.659 [2024-07-15 22:15:08.533800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.659 [2024-07-15 22:15:08.533838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.659 [2024-07-15 22:15:08.533851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.659 [2024-07-15 22:15:08.548650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.659 [2024-07-15 22:15:08.548712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.659 [2024-07-15 22:15:08.548727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.659 [2024-07-15 22:15:08.563932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.659 [2024-07-15 22:15:08.563972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.659 [2024-07-15 22:15:08.563986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.659 [2024-07-15 22:15:08.579119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.659 [2024-07-15 22:15:08.579166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.659 [2024-07-15 22:15:08.579180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.659 [2024-07-15 22:15:08.592510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.659 [2024-07-15 22:15:08.592558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.659 [2024-07-15 22:15:08.592571] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.937 [2024-07-15 22:15:08.608595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.937 [2024-07-15 22:15:08.608644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.937 [2024-07-15 22:15:08.608658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.937 [2024-07-15 22:15:08.620593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.937 [2024-07-15 22:15:08.620648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.937 [2024-07-15 22:15:08.620664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.937 [2024-07-15 22:15:08.634764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.937 [2024-07-15 22:15:08.634824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.937 [2024-07-15 22:15:08.634839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.937 [2024-07-15 22:15:08.649118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.937 [2024-07-15 22:15:08.649163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.937 [2024-07-15 22:15:08.649178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.937 [2024-07-15 22:15:08.664176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.937 [2024-07-15 22:15:08.664222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.937 [2024-07-15 22:15:08.664236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.937 [2024-07-15 22:15:08.678681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.937 [2024-07-15 22:15:08.678731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.937 [2024-07-15 22:15:08.678746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.937 [2024-07-15 22:15:08.694050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.937 [2024-07-15 22:15:08.694111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:21.937 [2024-07-15 22:15:08.694126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.937 [2024-07-15 22:15:08.707541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.937 [2024-07-15 22:15:08.707584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.938 [2024-07-15 22:15:08.707599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.938 [2024-07-15 22:15:08.720381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.938 [2024-07-15 22:15:08.720425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.938 [2024-07-15 22:15:08.720439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.938 [2024-07-15 22:15:08.735880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.938 [2024-07-15 22:15:08.735930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.938 [2024-07-15 22:15:08.735945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.938 [2024-07-15 22:15:08.750842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.938 [2024-07-15 22:15:08.750885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.938 [2024-07-15 22:15:08.750899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.938 [2024-07-15 22:15:08.764973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.938 [2024-07-15 22:15:08.765019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.938 [2024-07-15 22:15:08.765034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.938 [2024-07-15 22:15:08.782885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.938 [2024-07-15 22:15:08.782950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.938 [2024-07-15 22:15:08.782966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.938 [2024-07-15 22:15:08.797406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.938 [2024-07-15 22:15:08.797451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4623 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.938 [2024-07-15 22:15:08.797466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.938 [2024-07-15 22:15:08.812821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.938 [2024-07-15 22:15:08.812872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.938 [2024-07-15 22:15:08.812887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.938 [2024-07-15 22:15:08.825482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.938 [2024-07-15 22:15:08.825529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.938 [2024-07-15 22:15:08.825542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.938 [2024-07-15 22:15:08.840608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.938 [2024-07-15 22:15:08.840663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.938 [2024-07-15 22:15:08.840677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.938 [2024-07-15 22:15:08.855287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.938 [2024-07-15 22:15:08.855327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.938 [2024-07-15 22:15:08.855341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:21.938 [2024-07-15 22:15:08.870021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:21.938 [2024-07-15 22:15:08.870062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.938 [2024-07-15 22:15:08.870076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:08.885241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:08.885297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:08.885312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:08.898656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:08.898699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:08.898713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:08.913422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:08.913463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:08.913477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:08.927257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:08.927298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:08.927313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:08.941217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:08.941258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:08.941273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:08.956339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:08.956382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:08.956398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:08.968529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:08.968572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:08.968598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:08.983713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:08.983758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:08.983772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:08.997957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 
[2024-07-15 22:15:08.998007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:08.998021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:09.012428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:09.012469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:09.012483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:09.026886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:09.026938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:09.026957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:09.039693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:09.039734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:09.039747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:09.054520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:09.054562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:09.054577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:09.069818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:09.069862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:09.069876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:09.084191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:09.084234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:09.084248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:09.098712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:09.098758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:09.098776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:09.113887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:09.113928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:09.113942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:09.129343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:09.129387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:09.129402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:09.143956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:09.144001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:09.144015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.219 [2024-07-15 22:15:09.159335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.219 [2024-07-15 22:15:09.159381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.219 [2024-07-15 22:15:09.159402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.478 [2024-07-15 22:15:09.174135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.478 [2024-07-15 22:15:09.174174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.478 [2024-07-15 22:15:09.174188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.478 [2024-07-15 22:15:09.186671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0) 00:20:22.478 [2024-07-15 22:15:09.186738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.478 [2024-07-15 22:15:09.186753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:22.478 [2024-07-15 22:15:09.204209] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaf13e0)
00:20:22.478 [2024-07-15 22:15:09.204254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.478 [2024-07-15 22:15:09.204279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:22.478
00:20:22.478 Latency(us)
00:20:22.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:22.478 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:22.478 nvme0n1 : 2.01 17730.61 69.26 0.00 0.00 7210.82 3664.06 20375.74
00:20:22.478 ===================================================================================================================
00:20:22.478 Total : 17730.61 69.26 0.00 0.00 7210.82 3664.06 20375.74
00:20:22.478 0
00:20:22.478 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:22.478 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:22.478 | .driver_specific
00:20:22.478 | .nvme_error
00:20:22.478 | .status_code
00:20:22.478 | .command_transient_transport_error'
00:20:22.478 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:22.478 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:22.737 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 139 > 0 ))
00:20:22.737 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93452
00:20:22.737 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93452 ']'
00:20:22.737 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93452
00:20:22.737 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:20:22.737 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:22.737 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93452
00:20:22.737 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:22.737 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:22.737 killing process with pid 93452
00:20:22.737 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93452'
00:20:22.737 Received shutdown signal, test time was about 2.000000 seconds
00:20:22.737
00:20:22.737 Latency(us)
00:20:22.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:22.737 ===================================================================================================================
00:20:22.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:22.737 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93452
00:20:22.737 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93452
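The traced get_transient_errcount/killprocess sequence above boils down to querying bdevperf's RPC socket for per-bdev NVMe error counters and asserting that at least one command completed with COMMAND TRANSIENT TRANSPORT ERROR (139 in this run). A minimal stand-alone sketch of that check, assuming a bdevperf instance is already listening on /var/tmp/bperf.sock; the SPDK_REPO variable and the final exit-on-zero are illustrative, not lifted from the script:

  #!/usr/bin/env bash
  # Sketch of the transient-error check traced above (illustrative only).
  # SPDK_REPO is an assumed variable pointing at an SPDK checkout.
  SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
  bperf_sock=/var/tmp/bperf.sock

  # Ask bdevperf for per-bdev I/O statistics; the NVMe error counters are present
  # because bdev_nvme_set_options --nvme-error-stat was issued earlier.
  iostat=$("$SPDK_REPO/scripts/rpc.py" -s "$bperf_sock" bdev_get_iostat -b nvme0n1)

  # Pull out the count of completions with the transient transport error status code.
  errcount=$(jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error' <<< "$iostat")

  # The digest-error case only passes if at least one such completion was observed.
  (( errcount > 0 )) || exit 1

In the real test the comparison is done inline by the harness; failing the arithmetic test is enough to fail the run under set -e, which is what the sketch's exit mirrors.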
run_bperf_err randread 131072 16 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93529 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93529 /var/tmp/bperf.sock 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93529 ']' 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:22.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:22.996 22:15:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:22.996 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:22.996 Zero copy mechanism will not be used. 00:20:22.996 [2024-07-15 22:15:09.770847] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:20:22.996 [2024-07-15 22:15:09.770932] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93529 ] 00:20:22.996 [2024-07-15 22:15:09.905700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.254 [2024-07-15 22:15:09.966429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.254 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.254 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:23.254 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:23.254 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:23.512 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:23.512 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.512 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:23.512 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.512 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:23.512 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:23.770 nvme0n1 00:20:23.770 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:23.770 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.770 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:23.770 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.770 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:23.770 22:15:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:24.028 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:24.028 Zero copy mechanism will not be used. 00:20:24.028 Running I/O for 2 seconds... 
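For orientation, the xtrace above condenses to the following sequence. This is a sketch assembled from the commands visible in the trace, not a verbatim excerpt: bperf_rpc and bperf_py are the test suite's wrappers around scripts/rpc.py and bdevperf.py pointed at /var/tmp/bperf.sock, while rpc_cmd, which the trace leaves unexpanded, toggles crc32c error injection in the accel framework.

  # Validate the run that just ended: the per-status-code error counter enabled by
  # --nvme-error-stat must be non-zero (the trace shows it came back as 139).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

  # Tear down the old bdevperf (pid 93452) and start the next case: randread with
  # 131072-byte I/Os at queue depth 16. The harness backgrounds it (-z) and waits
  # for the RPC socket to appear.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

  # Recreate the error path for the new run.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable           # injection off while attaching
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32     # corrupt crc32c for the I/O phase
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The data digest errors that follow are therefore expected: with --ddgst enabled and crc32c deliberately corrupted, each failed READ is reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the same command_transient_transport_error counter that the check at host/digest.sh@71 reads back after the 2-second run.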
00:20:24.028 [2024-07-15 22:15:10.868015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.028 [2024-07-15 22:15:10.868073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.028 [2024-07-15 22:15:10.868104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.028 [2024-07-15 22:15:10.872813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.028 [2024-07-15 22:15:10.872853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.028 [2024-07-15 22:15:10.872868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.028 [2024-07-15 22:15:10.878013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.028 [2024-07-15 22:15:10.878057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.028 [2024-07-15 22:15:10.878071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.028 [2024-07-15 22:15:10.882626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.028 [2024-07-15 22:15:10.882668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.028 [2024-07-15 22:15:10.882683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.028 [2024-07-15 22:15:10.886114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.028 [2024-07-15 22:15:10.886151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.028 [2024-07-15 22:15:10.886165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.028 [2024-07-15 22:15:10.891785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.028 [2024-07-15 22:15:10.891826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.028 [2024-07-15 22:15:10.891840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.028 [2024-07-15 22:15:10.896823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.896868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.896883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.902183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.902223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.902237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.905267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.905305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.905319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.910652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.910691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.910706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.915792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.915833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.915847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.919476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.919513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.919527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.924606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.924670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.924696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.930151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.930197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.930212] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.935191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.935233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.935248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.938607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.938667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.938690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.944942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.944995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.945017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.951176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.951225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.951246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.957209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.957260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.957280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.963633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.963683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.963705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.969442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.969492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.969513] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.029 [2024-07-15 22:15:10.975934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.029 [2024-07-15 22:15:10.975988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.029 [2024-07-15 22:15:10.976017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.288 [2024-07-15 22:15:10.982743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.288 [2024-07-15 22:15:10.982799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.288 [2024-07-15 22:15:10.982820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.288 [2024-07-15 22:15:10.989377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.288 [2024-07-15 22:15:10.989425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.288 [2024-07-15 22:15:10.989447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.288 [2024-07-15 22:15:10.995442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.288 [2024-07-15 22:15:10.995483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.288 [2024-07-15 22:15:10.995497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.288 [2024-07-15 22:15:11.000929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.288 [2024-07-15 22:15:11.000970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.288 [2024-07-15 22:15:11.000984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.288 [2024-07-15 22:15:11.006013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.288 [2024-07-15 22:15:11.006053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.288 [2024-07-15 22:15:11.006067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.288 [2024-07-15 22:15:11.010713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.288 [2024-07-15 22:15:11.010762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:24.288 [2024-07-15 22:15:11.010778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.288 [2024-07-15 22:15:11.014119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.288 [2024-07-15 22:15:11.014157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.288 [2024-07-15 22:15:11.014171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.288 [2024-07-15 22:15:11.019019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.288 [2024-07-15 22:15:11.019060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.288 [2024-07-15 22:15:11.019075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.288 [2024-07-15 22:15:11.024635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.288 [2024-07-15 22:15:11.024687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.288 [2024-07-15 22:15:11.024702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.288 [2024-07-15 22:15:11.030156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.288 [2024-07-15 22:15:11.030215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.288 [2024-07-15 22:15:11.030230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.288 [2024-07-15 22:15:11.033853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.288 [2024-07-15 22:15:11.033894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.033908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.038203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.038259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.038274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.042460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.042501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.042515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.046991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.047036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.047053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.051780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.051821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.051836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.055549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.055590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.055604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.060137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.060176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.060189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.065200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.065242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.065256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.069375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.069419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.069433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.073914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.073955] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.073970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.078703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.078749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.078764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.083351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.083391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.083404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.087058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.087120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.087134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.091267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.091306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.091320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.095904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.095943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.095956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.100027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.100068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.100097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.104643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.104698] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.104714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.109030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.109102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.109120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.113195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.113250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.113265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.116875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.116927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.116942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.121917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.121985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.122001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.126345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.126422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.126439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.130719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.130772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.130787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.134867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 
00:20:24.289 [2024-07-15 22:15:11.134922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.134938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.138233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.138281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.138296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.143139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.143196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.143210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.148290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.148348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.148363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.151675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.151730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.151744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.156324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.156385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.156408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.289 [2024-07-15 22:15:11.161106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.289 [2024-07-15 22:15:11.161158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.289 [2024-07-15 22:15:11.161173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.165595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.165635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.165649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.170442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.170483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.170497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.175535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.175596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.175611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.180663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.180720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.180735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.183915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.183958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.183973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.189176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.189230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.189245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.194123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.194178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.194193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.198813] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.198869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.198885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.202392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.202442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.202457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.206980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.207033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.207049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.212103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.212151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.212166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.216653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.216705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.216720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.220677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.220737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.220752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.224806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.224856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.224871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:24.290 [2024-07-15 22:15:11.228851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.228914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.228933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.290 [2024-07-15 22:15:11.233695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.290 [2024-07-15 22:15:11.233755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.290 [2024-07-15 22:15:11.233770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.237391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.237431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.237446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.241413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.241451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.241464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.246423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.246473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.246487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.250480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.250524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.250539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.254516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.254569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.254583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.259388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.259440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.259455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.263123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.263167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.263181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.267459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.267511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.267526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.272015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.272065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.272099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.276551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.276601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.276616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.281031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.281093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.281110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.284953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.285002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.285017] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.289158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.289203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.289218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.293048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.293108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.293124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.296601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.296648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.296670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.301130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.301176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.301190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.305225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.305270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.305285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.551 [2024-07-15 22:15:11.308939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.551 [2024-07-15 22:15:11.308984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.551 [2024-07-15 22:15:11.308997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.313223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.313270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.313285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.316972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.317018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.317033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.321039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.321100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.321116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.325724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.325776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.325791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.329345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.329389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.329403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.334226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.334278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.334293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.338562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.338609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.338623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.342292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.342348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:24.552 [2024-07-15 22:15:11.342364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.346650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.346699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.346713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.351143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.351200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.351216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.355190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.355252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.355267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.359323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.359372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.359387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.362692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.362736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.362750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.367677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.367731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.367745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.371172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.371217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.371230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.375616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.375676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.375696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.381138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.381191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.381205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.385835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.385887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.385902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.389278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.389332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.389354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.393749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.393802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.393816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.398639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.398688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.398707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.402414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.402453] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.402467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.409318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.409392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.409421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.419605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.419713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.419748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.426023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.426110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.426130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.431906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.431970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.431988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.438109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.438173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.438190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.445884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.445960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.445979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.552 [2024-07-15 22:15:11.453034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.552 [2024-07-15 22:15:11.453123] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.552 [2024-07-15 22:15:11.453143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.553 [2024-07-15 22:15:11.458778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.553 [2024-07-15 22:15:11.458843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.553 [2024-07-15 22:15:11.458862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.553 [2024-07-15 22:15:11.462691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.553 [2024-07-15 22:15:11.462743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.553 [2024-07-15 22:15:11.462760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.553 [2024-07-15 22:15:11.468535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.553 [2024-07-15 22:15:11.468597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.553 [2024-07-15 22:15:11.468616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.553 [2024-07-15 22:15:11.473492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.553 [2024-07-15 22:15:11.473559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.553 [2024-07-15 22:15:11.473578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.553 [2024-07-15 22:15:11.478625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.553 [2024-07-15 22:15:11.478684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.553 [2024-07-15 22:15:11.478702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.553 [2024-07-15 22:15:11.482972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.553 [2024-07-15 22:15:11.483028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.553 [2024-07-15 22:15:11.483045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.553 [2024-07-15 22:15:11.489053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2104380) 00:20:24.553 [2024-07-15 22:15:11.489137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.553 [2024-07-15 22:15:11.489155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.553 [2024-07-15 22:15:11.495366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.553 [2024-07-15 22:15:11.495432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.553 [2024-07-15 22:15:11.495456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.499475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.499542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.499567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.505289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.505351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.505368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.511450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.511522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.511539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.517710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.517773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.517790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.521995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.522050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.522066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.526898] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.526954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.526976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.533038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.533122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.533140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.539919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.540018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.540050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.546160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.546229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.546247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.550386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.550440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.550458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.555908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.555968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.555993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.562346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.562428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.562449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
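The burst of identical entries above is the digest-error portion of the test: the host recomputes the NVMe/TCP data digest (DDGST, a CRC32C over each received data PDU), and every injected mismatch is reported as a data digest error and completed with the retryable status the log prints, COMMAND TRANSIENT TRANSPORT ERROR (00/22). The sketch below is illustrative only, not SPDK's implementation; crc32c() and data_digest_ok() are made-up names, and it assumes the standard reflected CRC32C (polynomial 0x82F63B78) used for NVMe/TCP digests. It shows the kind of check behind these messages.

/* Illustrative sketch of a data-digest (DDGST) check -- not SPDK code. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected, init/final-xor 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Recompute the digest over the received data and compare it with the DDGST
 * carried in the PDU; a mismatch is what the log reports as a digest error. */
static int data_digest_ok(const uint8_t *data, size_t len, uint32_t ddgst)
{
    return crc32c(data, len) == ddgst;
}

int main(void)
{
    uint8_t payload[32] = { 0 };          /* stand-in for received C2H data */
    uint32_t good = crc32c(payload, sizeof(payload));
    uint32_t bad = good ^ 0x1u;           /* simulate an injected digest error */

    printf("intact digest:    %s\n",
           data_digest_ok(payload, sizeof(payload), good) ? "ok" : "MISMATCH");
    printf("corrupted digest: %s\n",
           data_digest_ok(payload, sizeof(payload), bad) ? "ok" : "MISMATCH");
    return 0;
}

On a mismatch the command is not failed permanently: marking it as a transient transport error (00/22), as every completion above does, leaves the initiator free to retry the read.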
00:20:24.814 [2024-07-15 22:15:11.567047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.567116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.567134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.571298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.571353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.571371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.576681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.576745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.576762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.582746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.582811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.582828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.588679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.588745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.588763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.591978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.592027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.592043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.597410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.597474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.597491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.602793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.602862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.602884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.607903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.607962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.607981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.613392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.613450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.613468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.618150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.618205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.618222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.623349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.623410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.623436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.628492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.814 [2024-07-15 22:15:11.628552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.814 [2024-07-15 22:15:11.628569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.814 [2024-07-15 22:15:11.633262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.633316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.633333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.638757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.638818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.638835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.644062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.644137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.644156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.648869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.648937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.648954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.654284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.654341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.654359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.658932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.658994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.659012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.663143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.663189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.663203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.667234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.667280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 
[2024-07-15 22:15:11.667294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.671186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.671238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.671253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.676001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.676050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.676065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.679500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.679546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.679560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.683249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.683294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.683309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.687681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.687729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.687744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.692244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.692304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.692320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.696552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.696589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.696603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.700422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.700459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.700473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.704140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.704177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.704191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.708329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.708372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.708386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.711967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.712006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.712019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.717165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.717203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.717217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.721190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.721238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.721252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.725400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.725437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.725452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.729687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.729724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.729737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.733278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.733316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.733330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.737848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.737886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.737900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.741804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.741843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.741857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.746599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.746637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.746651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.751313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.751350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.751364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:24.815 [2024-07-15 22:15:11.754353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.815 [2024-07-15 22:15:11.754390] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.815 [2024-07-15 22:15:11.754404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:24.816 [2024-07-15 22:15:11.759210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:24.816 [2024-07-15 22:15:11.759246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.816 [2024-07-15 22:15:11.759260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.076 [2024-07-15 22:15:11.763791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.076 [2024-07-15 22:15:11.763829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.076 [2024-07-15 22:15:11.763843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.076 [2024-07-15 22:15:11.767233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.076 [2024-07-15 22:15:11.767270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.076 [2024-07-15 22:15:11.767284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.770979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.771016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.771030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.774870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.774906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.774919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.779483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.779521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.779534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.783407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.783456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.783472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.787786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.787825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.787839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.792311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.792350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.792364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.796095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.796130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.796144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.801044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.801095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.801111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.805550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.805587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.805600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.808765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.808803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.808816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.813648] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.813687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.813701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.817192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.817229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.817242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.821324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.821362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.821376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.825276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.825314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.825328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.829934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.829973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.829988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.834456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.834492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.834506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.837497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.837534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.837547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:20:25.077 [2024-07-15 22:15:11.842185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.842228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.842241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.846115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.846155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.846169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.851782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.851826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.851841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.856358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.856396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.856410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.859445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.859482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.859496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.864109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.864146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.864160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.868335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.868374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.868393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.871799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.871839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.871853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.875783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.875822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.875837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.880329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.880368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.880382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.884303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.884341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.884354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.888423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.888465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.888479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.891844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.891885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.891900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.896404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.896442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.896455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.900502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.900539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.900553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.903939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.903975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.903989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.907588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.907626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.907640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.911667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.077 [2024-07-15 22:15:11.911703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.077 [2024-07-15 22:15:11.911717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.077 [2024-07-15 22:15:11.916065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.916115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.916129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.919577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.919615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.919629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.924325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.924363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:25.078 [2024-07-15 22:15:11.924376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.929655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.929694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.929708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.934526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.934564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.934577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.938075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.938136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.938150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.942893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.942934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.942948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.947896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.947936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.947949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.952376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.952414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.952428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.957170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.957210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.957223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.960511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.960549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.960563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.965522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.965561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.965575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.970344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.970381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.970395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.973730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.973769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.973782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.978351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.978392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.978407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.983250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.983288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.983302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.987985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.988024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.988037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.992745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.992784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.992798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:11.996490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:11.996529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:11.996542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:12.001284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:12.001332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:12.001350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:12.006040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:12.006092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:12.006108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:12.009598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:12.009653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:12.009677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:12.016214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.078 [2024-07-15 22:15:12.016267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:12.016300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.078 [2024-07-15 22:15:12.023099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 
00:20:25.078 [2024-07-15 22:15:12.023136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.078 [2024-07-15 22:15:12.023150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.029393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.029433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.029447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.033414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.033453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.033471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.039400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.039454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.039477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.045838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.045893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.045917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.052008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.052048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.052063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.056918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.056956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.056971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.062876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.062916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.062930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.066598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.066639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.066652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.072380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.072435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.072460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.078116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.078155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.078169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.082785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.082823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.082836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.088311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.088351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.088372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.091966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.092003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.092017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.095793] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.095830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.095843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.100819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.100860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.100874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.104458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.104496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.104509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.108788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.108826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.108840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.113205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.113242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.113257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.116587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.116624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.116638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.121634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.121673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.121687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:25.337 [2024-07-15 22:15:12.126251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.126288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.126301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.130015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.130053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.130066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.134017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.134055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.134069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.138563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.138601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.138615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.141930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.141974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.141989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.146843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.146884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.146897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.150772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.150810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.150824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.154968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.155005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.155019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.159545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.159583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.159597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.163459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.163500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.163514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.167359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.167396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.167410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.170976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.171014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.171028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.175101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.175137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.175150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.179670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.179709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.179723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.183202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.183238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.183252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.187174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.187210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.187223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.191230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.337 [2024-07-15 22:15:12.191267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.337 [2024-07-15 22:15:12.191280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.337 [2024-07-15 22:15:12.195560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.195596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.195609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.199356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.199393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.199407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.204288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.204348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.204373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.211425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.211477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.211501] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.218947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.219007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.219033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.225550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.225605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.225630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.232536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.232625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.232658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.241701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.241753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.241771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.246927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.246976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.246994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.252592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.252640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.252657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.259269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.259340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:25.338 [2024-07-15 22:15:12.259367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.265724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.265778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.265795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.272150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.272200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.272218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.278470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.278520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.278537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.338 [2024-07-15 22:15:12.283432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.338 [2024-07-15 22:15:12.283481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.338 [2024-07-15 22:15:12.283498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.287529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.287584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.287603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.293502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.293555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.293581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.300167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.300215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11680 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.300233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.305542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.305590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.305607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.309631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.309682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.309700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.315130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.315177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.315194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.320789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.320837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.320854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.326581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.326635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.326659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.332326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.332372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.332388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.337208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.337257] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.337274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.342205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.342251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.342268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.347716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.347764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.347781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.352223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.352268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.352303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.357475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.357521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.357538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.363005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.363052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.363069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.368749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.368795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.368812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.374182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.374226] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.374243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.378302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.378348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.378364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.384179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.384225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.384247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.390187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.390234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.390251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.395857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.395908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.395925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.400355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.400404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.400422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.404168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.404212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.404228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.411706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 
00:20:25.596 [2024-07-15 22:15:12.411770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.411800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.419236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.419285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.419303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.425933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.596 [2024-07-15 22:15:12.425996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.596 [2024-07-15 22:15:12.426025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.596 [2024-07-15 22:15:12.433062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.433146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.433177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.439314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.439360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.439377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.444900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.444959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.444984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.451011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.451062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.451103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.459058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.459151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.459179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.467318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.467392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.467419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.475472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.475552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.475579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.483305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.483375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.483402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.490980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.491041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.491066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.498538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.498602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.498627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.506357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.506427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.506454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.514035] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.514118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.514171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.522044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.522126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.522153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.529811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.529876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.529902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.597 [2024-07-15 22:15:12.537729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.597 [2024-07-15 22:15:12.537793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.597 [2024-07-15 22:15:12.537817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.545418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.545481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.545512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.553467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.553535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.553559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.561336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.561396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.561420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:20:25.854 [2024-07-15 22:15:12.569179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.569239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.569262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.576885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.576947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.576970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.584729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.584795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.584819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.592173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.592233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.592258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.599752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.599815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.599841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.607665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.607739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.607765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.615263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.615325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.615350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.623119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.623189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.623213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.630932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.630999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.631024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.638495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.638555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.638579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.646104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.646162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.646187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.653419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.653478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.653502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.658347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.658407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.658432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.666242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.666306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.666329] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.674202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.674260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.674282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.681759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.681825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.681851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.689639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.689700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.689725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.697615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.697679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.697705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.705424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.705486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.705510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.713013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.713099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.713128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.720636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.720703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.720727] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.728315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.728378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.728403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.735957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.854 [2024-07-15 22:15:12.736039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.854 [2024-07-15 22:15:12.736068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.854 [2024-07-15 22:15:12.743583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.743647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.855 [2024-07-15 22:15:12.743674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.855 [2024-07-15 22:15:12.748639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.748710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.855 [2024-07-15 22:15:12.748736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.855 [2024-07-15 22:15:12.755764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.755812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.855 [2024-07-15 22:15:12.755829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.855 [2024-07-15 22:15:12.762198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.762246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.855 [2024-07-15 22:15:12.762263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.855 [2024-07-15 22:15:12.767475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.767514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:25.855 [2024-07-15 22:15:12.767528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.855 [2024-07-15 22:15:12.770411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.770449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.855 [2024-07-15 22:15:12.770462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.855 [2024-07-15 22:15:12.775371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.775410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.855 [2024-07-15 22:15:12.775424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.855 [2024-07-15 22:15:12.779150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.779195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.855 [2024-07-15 22:15:12.779211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.855 [2024-07-15 22:15:12.782991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.783032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.855 [2024-07-15 22:15:12.783046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:25.855 [2024-07-15 22:15:12.787167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.787210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.855 [2024-07-15 22:15:12.787224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:25.855 [2024-07-15 22:15:12.791386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.791432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.855 [2024-07-15 22:15:12.791447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:25.855 [2024-07-15 22:15:12.795807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.795846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23968 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.855 [2024-07-15 22:15:12.795860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:25.855 [2024-07-15 22:15:12.799955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:25.855 [2024-07-15 22:15:12.799995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.855 [2024-07-15 22:15:12.800009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.111 [2024-07-15 22:15:12.803889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.111 [2024-07-15 22:15:12.803929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.111 [2024-07-15 22:15:12.803943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.111 [2024-07-15 22:15:12.808832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.111 [2024-07-15 22:15:12.808872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.111 [2024-07-15 22:15:12.808886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.111 [2024-07-15 22:15:12.812614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.111 [2024-07-15 22:15:12.812668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.111 [2024-07-15 22:15:12.812685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.111 [2024-07-15 22:15:12.817284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.111 [2024-07-15 22:15:12.817325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.111 [2024-07-15 22:15:12.817340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.111 [2024-07-15 22:15:12.822923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.111 [2024-07-15 22:15:12.822962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.111 [2024-07-15 22:15:12.822976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.111 [2024-07-15 22:15:12.828373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.111 [2024-07-15 22:15:12.828414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.111 [2024-07-15 22:15:12.828429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.111 [2024-07-15 22:15:12.831868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.111 [2024-07-15 22:15:12.831909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.111 [2024-07-15 22:15:12.831924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.111 [2024-07-15 22:15:12.836380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.111 [2024-07-15 22:15:12.836419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.111 [2024-07-15 22:15:12.836433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.111 [2024-07-15 22:15:12.839962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.111 [2024-07-15 22:15:12.840000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.111 [2024-07-15 22:15:12.840013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.111 [2024-07-15 22:15:12.843750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.111 [2024-07-15 22:15:12.843788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.112 [2024-07-15 22:15:12.843802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.112 [2024-07-15 22:15:12.848421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.112 [2024-07-15 22:15:12.848459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.112 [2024-07-15 22:15:12.848474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.112 [2024-07-15 22:15:12.852156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.112 [2024-07-15 22:15:12.852197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.112 [2024-07-15 22:15:12.852211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.112 [2024-07-15 22:15:12.857400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2104380) 00:20:26.112 [2024-07-15 22:15:12.857463] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.112 [2024-07-15 22:15:12.857491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.112 00:20:26.112 Latency(us) 00:20:26.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.112 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:26.112 nvme0n1 : 2.00 6103.57 762.95 0.00 0.00 2616.28 711.21 10128.29 00:20:26.112 =================================================================================================================== 00:20:26.112 Total : 6103.57 762.95 0.00 0.00 2616.28 711.21 10128.29 00:20:26.112 0 00:20:26.112 22:15:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:26.112 22:15:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:26.112 22:15:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:26.112 22:15:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:26.112 | .driver_specific 00:20:26.112 | .nvme_error 00:20:26.112 | .status_code 00:20:26.112 | .command_transient_transport_error' 00:20:26.367 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 394 > 0 )) 00:20:26.367 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93529 00:20:26.367 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93529 ']' 00:20:26.367 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93529 00:20:26.367 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:26.367 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:26.367 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93529 00:20:26.367 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:26.367 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:26.367 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93529' 00:20:26.367 killing process with pid 93529 00:20:26.367 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93529 00:20:26.367 Received shutdown signal, test time was about 2.000000 seconds 00:20:26.367 00:20:26.367 Latency(us) 00:20:26.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.367 =================================================================================================================== 00:20:26.367 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.368 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93529 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:26.623 
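The block of *ERROR*/*NOTICE* lines above is the expected outcome of this stage rather than a failure: the digest_error test arms CRC32C corruption in the accel layer, so every READ on the digest-enabled controller trips the data digest check in nvme_tcp.c and is completed with COMMAND TRANSIENT TRANSPORT ERROR (the "(00/22)" status printed by spdk_nvme_print_completion). The trace then confirms those errors were actually counted by querying the bdev's NVMe error statistics over the bperf RPC socket (the helper name get_transient_errcount comes from host/digest.sh itself). A condensed sketch of that check, using only the paths, socket, and jq filter shown in the trace above:

    # Ask bdevperf for per-bdev I/O statistics and pull out how many completions
    # carried the COMMAND TRANSIENT TRANSPORT ERROR status. The counters appear
    # under driver_specific.nvme_error because bdev_nvme_set_options is invoked
    # with --nvme-error-stat (visible in the setup of the next run below).
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                 bdev_get_iostat -b nvme0n1 |
               jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
    (( errcount > 0 ))   # this randread run recorded 394 such completions

The trace that follows repeats the same pattern for the randwrite 4096/128 case: a fresh bdevperf is started on /var/tmp/bperf.sock, error statistics are enabled again, crc32c error injection is disabled while the controller is attached with --ddgst, then re-armed with 'accel_error_inject_error -o crc32c -t corrupt -i 256' before perform_tests drives I/O for two seconds.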
22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93600 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93600 /var/tmp/bperf.sock 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93600 ']' 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.623 22:15:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:26.623 [2024-07-15 22:15:13.460133] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:20:26.624 [2024-07-15 22:15:13.460229] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93600 ] 00:20:26.880 [2024-07-15 22:15:13.600544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.880 [2024-07-15 22:15:13.675367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.829 22:15:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.829 22:15:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:27.829 22:15:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:27.829 22:15:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:28.086 22:15:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:28.086 22:15:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.086 22:15:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:28.086 22:15:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.086 22:15:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:28.086 22:15:14 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:28.342 nvme0n1 00:20:28.342 22:15:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:28.342 22:15:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.343 22:15:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:28.343 22:15:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.343 22:15:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:28.343 22:15:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:28.618 Running I/O for 2 seconds... 00:20:28.618 [2024-07-15 22:15:15.398416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f6458 00:20:28.618 [2024-07-15 22:15:15.399531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.399573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.410827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190df988 00:20:28.618 [2024-07-15 22:15:15.412695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.412740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.423876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e8088 00:20:28.618 [2024-07-15 22:15:15.424988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.425033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.439683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f9b30 00:20:28.618 [2024-07-15 22:15:15.441454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.441497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.448858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f57b0 00:20:28.618 [2024-07-15 22:15:15.449655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.449692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.464529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ec408 00:20:28.618 [2024-07-15 22:15:15.465999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.466039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.476446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e1b48 00:20:28.618 [2024-07-15 22:15:15.478349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.478389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.489466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ea248 00:20:28.618 [2024-07-15 22:15:15.490662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.490700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.505424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f7970 00:20:28.618 [2024-07-15 22:15:15.507287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.507330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.517164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f31b8 00:20:28.618 [2024-07-15 22:15:15.518780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.518826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.528917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190dece0 00:20:28.618 [2024-07-15 22:15:15.530332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.530372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.540201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ee190 00:20:28.618 [2024-07-15 22:15:15.541583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.541620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.552204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190dece0 00:20:28.618 [2024-07-15 22:15:15.554021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.554067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:28.618 [2024-07-15 22:15:15.565294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e84c0 00:20:28.618 [2024-07-15 22:15:15.566398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.618 [2024-07-15 22:15:15.566440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.581034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f57b0 00:20:28.876 [2024-07-15 22:15:15.582781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.582820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.590249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fac10 00:20:28.876 [2024-07-15 22:15:15.591012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.591057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.606378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e6b70 00:20:28.876 [2024-07-15 22:15:15.608129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.608173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.615496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e99d8 00:20:28.876 [2024-07-15 22:15:15.616303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.616340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.631399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f0bc0 00:20:28.876 [2024-07-15 22:15:15.632870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 
22:15:15.632909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.643221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f92c0 00:20:28.876 [2024-07-15 22:15:15.645149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.645197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.656292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fc128 00:20:28.876 [2024-07-15 22:15:15.657473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.657511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.672631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e49b0 00:20:28.876 [2024-07-15 22:15:15.674657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.674700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.682045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e4140 00:20:28.876 [2024-07-15 22:15:15.682938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.682975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.697233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e8088 00:20:28.876 [2024-07-15 22:15:15.698611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.698648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.706618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f81e0 00:20:28.876 [2024-07-15 22:15:15.707357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.707381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.720283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190eee38 00:20:28.876 [2024-07-15 22:15:15.721527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:28.876 [2024-07-15 22:15:15.721566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.732538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e7818 00:20:28.876 [2024-07-15 22:15:15.733759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.733806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.744027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190eee38 00:20:28.876 [2024-07-15 22:15:15.745129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.745163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.758730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f9b30 00:20:28.876 [2024-07-15 22:15:15.760670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.760711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.767372] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ed4e8 00:20:28.876 [2024-07-15 22:15:15.768125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.768162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.780896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f35f0 00:20:28.876 [2024-07-15 22:15:15.782196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.782231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.795158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ef6a8 00:20:28.876 [2024-07-15 22:15:15.797080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.797126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.803790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fc128 00:20:28.876 [2024-07-15 22:15:15.804553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14062 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.804587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:28.876 [2024-07-15 22:15:15.818985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fb8b8 00:20:28.876 [2024-07-15 22:15:15.820751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:28.876 [2024-07-15 22:15:15.820784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.830901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f4f40 00:20:29.135 [2024-07-15 22:15:15.832645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.832678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.839509] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fac10 00:20:29.135 [2024-07-15 22:15:15.840262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.840307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.853960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ed0b0 00:20:29.135 [2024-07-15 22:15:15.855399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.855430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.865154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ea680 00:20:29.135 [2024-07-15 22:15:15.866295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.866328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.876834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f0350 00:20:29.135 [2024-07-15 22:15:15.877972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.878005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.891377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f1ca0 00:20:29.135 [2024-07-15 22:15:15.893194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22901 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.893226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.899869] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e0630 00:20:29.135 [2024-07-15 22:15:15.900714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.900746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.914244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e12d8 00:20:29.135 [2024-07-15 22:15:15.915736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.915768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.925371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e4578 00:20:29.135 [2024-07-15 22:15:15.926598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.926630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.937044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e12d8 00:20:29.135 [2024-07-15 22:15:15.938270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.938302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.949580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e23b8 00:20:29.135 [2024-07-15 22:15:15.950800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.950833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.961024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e12d8 00:20:29.135 [2024-07-15 22:15:15.962103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.962134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.974508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e0630 00:20:29.135 [2024-07-15 22:15:15.976048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:118 nsid:1 lba:1620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.976091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.986688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e6738 00:20:29.135 [2024-07-15 22:15:15.988247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.988286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:15.996628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f9f68 00:20:29.135 [2024-07-15 22:15:15.997237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:15.997269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:16.009190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e7c50 00:20:29.135 [2024-07-15 22:15:16.010098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:16.010130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:16.020683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e0630 00:20:29.135 [2024-07-15 22:15:16.021445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:16.021481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:16.035983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f6458 00:20:29.135 [2024-07-15 22:15:16.037737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:16.037771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:16.046160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f1ca0 00:20:29.135 [2024-07-15 22:15:16.047544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:16.047575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:16.059773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ef6a8 00:20:29.135 [2024-07-15 22:15:16.061221] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:16.061257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:29.135 [2024-07-15 22:15:16.071818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ef6a8 00:20:29.135 [2024-07-15 22:15:16.072931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.135 [2024-07-15 22:15:16.072963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:29.393 [2024-07-15 22:15:16.085893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fd640 00:20:29.393 [2024-07-15 22:15:16.087654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.393 [2024-07-15 22:15:16.087687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:29.393 [2024-07-15 22:15:16.098500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190eb328 00:20:29.393 [2024-07-15 22:15:16.100380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.393 [2024-07-15 22:15:16.100413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:29.393 [2024-07-15 22:15:16.107072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190dfdc0 00:20:29.393 [2024-07-15 22:15:16.108009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.393 [2024-07-15 22:15:16.108044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:29.393 [2024-07-15 22:15:16.119310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190eaab8 00:20:29.393 [2024-07-15 22:15:16.120229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.393 [2024-07-15 22:15:16.120262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:29.393 [2024-07-15 22:15:16.133374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f96f8 00:20:29.393 [2024-07-15 22:15:16.134951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.393 [2024-07-15 22:15:16.134983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:29.393 [2024-07-15 22:15:16.145902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ed4e8 00:20:29.393 [2024-07-15 22:15:16.147655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.393 [2024-07-15 22:15:16.147688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:29.393 [2024-07-15 22:15:16.154491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f8e88 00:20:29.393 [2024-07-15 22:15:16.155305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.393 [2024-07-15 22:15:16.155339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:29.393 [2024-07-15 22:15:16.166796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f4f40 00:20:29.393 [2024-07-15 22:15:16.167576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.393 [2024-07-15 22:15:16.167609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:29.393 [2024-07-15 22:15:16.180556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190de470 00:20:29.393 [2024-07-15 22:15:16.181964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.393 [2024-07-15 22:15:16.182001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:29.393 [2024-07-15 22:15:16.192485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e7c50 00:20:29.394 [2024-07-15 22:15:16.193581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.193615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:29.394 [2024-07-15 22:15:16.203887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f3e60 00:20:29.394 [2024-07-15 22:15:16.204836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.204873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:29.394 [2024-07-15 22:15:16.215374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e6738 00:20:29.394 [2024-07-15 22:15:16.216159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.216197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:29.394 [2024-07-15 22:15:16.230527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e0a68 00:20:29.394 [2024-07-15 
22:15:16.232324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.232362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:29.394 [2024-07-15 22:15:16.239385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e88f8 00:20:29.394 [2024-07-15 22:15:16.240346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.240382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:29.394 [2024-07-15 22:15:16.254052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e0ea0 00:20:29.394 [2024-07-15 22:15:16.255732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.255781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:29.394 [2024-07-15 22:15:16.265744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e27f0 00:20:29.394 [2024-07-15 22:15:16.267106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.267141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:29.394 [2024-07-15 22:15:16.277692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fc560 00:20:29.394 [2024-07-15 22:15:16.278838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.278872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:29.394 [2024-07-15 22:15:16.289258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e0a68 00:20:29.394 [2024-07-15 22:15:16.290235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.290267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:29.394 [2024-07-15 22:15:16.300788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f7100 00:20:29.394 [2024-07-15 22:15:16.301620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.301651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:29.394 [2024-07-15 22:15:16.316148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ff3c8 
00:20:29.394 [2024-07-15 22:15:16.318010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.318052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:29.394 [2024-07-15 22:15:16.328043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f46d0 00:20:29.394 [2024-07-15 22:15:16.329886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.329923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:29.394 [2024-07-15 22:15:16.336645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e9168 00:20:29.394 [2024-07-15 22:15:16.337484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.394 [2024-07-15 22:15:16.337516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.351234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e4140 00:20:29.652 [2024-07-15 22:15:16.352848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.352888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.362843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190eee38 00:20:29.652 [2024-07-15 22:15:16.364111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.364144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.375000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e4140 00:20:29.652 [2024-07-15 22:15:16.376390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.376430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.388297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ef6a8 00:20:29.652 [2024-07-15 22:15:16.389516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.389554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.399855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with 
pdu=0x2000190e4140 00:20:29.652 [2024-07-15 22:15:16.400958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.400993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.414626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f0788 00:20:29.652 [2024-07-15 22:15:16.416515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.416550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.423219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f57b0 00:20:29.652 [2024-07-15 22:15:16.424175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.424208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.437792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f7100 00:20:29.652 [2024-07-15 22:15:16.439434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.439475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.449252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ef6a8 00:20:29.652 [2024-07-15 22:15:16.450566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.450604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.461072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190eea00 00:20:29.652 [2024-07-15 22:15:16.462362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.462397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.475564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f8618 00:20:29.652 [2024-07-15 22:15:16.477558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.477595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.485561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f41880) with pdu=0x2000190fc128 00:20:29.652 [2024-07-15 22:15:16.486887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.486923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.500313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e0ea0 00:20:29.652 [2024-07-15 22:15:16.502319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.502360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.509002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ed920 00:20:29.652 [2024-07-15 22:15:16.510033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.510069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.521465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e84c0 00:20:29.652 [2024-07-15 22:15:16.522491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.522539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.533217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fc128 00:20:29.652 [2024-07-15 22:15:16.534095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.534133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.547839] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f7100 00:20:29.652 [2024-07-15 22:15:16.548957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.548995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.559522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fe720 00:20:29.652 [2024-07-15 22:15:16.560428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.652 [2024-07-15 22:15:16.560465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:29.652 [2024-07-15 22:15:16.571245] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fd640 00:20:29.652 [2024-07-15 22:15:16.572042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.653 [2024-07-15 22:15:16.572078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:29.653 [2024-07-15 22:15:16.585600] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f6020 00:20:29.653 [2024-07-15 22:15:16.587204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.653 [2024-07-15 22:15:16.587243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.653 [2024-07-15 22:15:16.597437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e7818 00:20:29.653 [2024-07-15 22:15:16.598859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.653 [2024-07-15 22:15:16.598898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.609302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fb480 00:20:29.911 [2024-07-15 22:15:16.610581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.610624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.621116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e5a90 00:20:29.911 [2024-07-15 22:15:16.622226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.622266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.635973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e1710 00:20:29.911 [2024-07-15 22:15:16.637954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.637988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.644733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f0788 00:20:29.911 [2024-07-15 22:15:16.645682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.645717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.657239] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ee5c8 00:20:29.911 [2024-07-15 22:15:16.658156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.658204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.668898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fcdd0 00:20:29.911 [2024-07-15 22:15:16.669678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.669713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.683405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f1ca0 00:20:29.911 [2024-07-15 22:15:16.684418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.684457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.694974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190efae0 00:20:29.911 [2024-07-15 22:15:16.695783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.695818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.706561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f6458 00:20:29.911 [2024-07-15 22:15:16.707226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.707260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.720459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190de038 00:20:29.911 [2024-07-15 22:15:16.721906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.721942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.731963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f9f68 00:20:29.911 [2024-07-15 22:15:16.733279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.733313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:29.911 
[2024-07-15 22:15:16.743520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f46d0 00:20:29.911 [2024-07-15 22:15:16.744654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.744689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.755014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190eee38 00:20:29.911 [2024-07-15 22:15:16.755973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.756008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.766637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ddc00 00:20:29.911 [2024-07-15 22:15:16.767457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.767497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.782267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e01f8 00:20:29.911 [2024-07-15 22:15:16.784236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.784280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.790957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f0350 00:20:29.911 [2024-07-15 22:15:16.791917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.791949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.805586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190df550 00:20:29.911 [2024-07-15 22:15:16.807265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.807302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.817966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f2510 00:20:29.911 [2024-07-15 22:15:16.819614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.819646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 
dnr:0 00:20:29.911 [2024-07-15 22:15:16.829464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ef6a8 00:20:29.911 [2024-07-15 22:15:16.830947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.830982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.840930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e9e10 00:20:29.911 [2024-07-15 22:15:16.842317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.842355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:29.911 [2024-07-15 22:15:16.852744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e0ea0 00:20:29.911 [2024-07-15 22:15:16.854076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:29.911 [2024-07-15 22:15:16.854125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:30.169 [2024-07-15 22:15:16.867269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e8088 00:20:30.169 [2024-07-15 22:15:16.869296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.169 [2024-07-15 22:15:16.869332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.169 [2024-07-15 22:15:16.875861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f8618 00:20:30.169 [2024-07-15 22:15:16.876898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.169 [2024-07-15 22:15:16.876932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:30.169 [2024-07-15 22:15:16.890381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e5ec8 00:20:30.169 [2024-07-15 22:15:16.892104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.169 [2024-07-15 22:15:16.892139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:30.169 [2024-07-15 22:15:16.902614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e0ea0 00:20:30.169 [2024-07-15 22:15:16.904338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.169 [2024-07-15 22:15:16.904373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 
cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:30.169 [2024-07-15 22:15:16.912528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190eff18 00:20:30.169 [2024-07-15 22:15:16.913314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.169 [2024-07-15 22:15:16.913353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:16.924858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190eb760 00:20:30.170 [2024-07-15 22:15:16.926121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:16.926161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:16.939467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f7538 00:20:30.170 [2024-07-15 22:15:16.941403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:16.941441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:16.948120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190df988 00:20:30.170 [2024-07-15 22:15:16.949068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:16.949112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:16.962640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ecc78 00:20:30.170 [2024-07-15 22:15:16.964262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:16.964312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:16.973914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f92c0 00:20:30.170 [2024-07-15 22:15:16.975333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:16.975370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:16.985711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f8e88 00:20:30.170 [2024-07-15 22:15:16.987047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:16.987097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:16.997829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fc128 00:20:30.170 [2024-07-15 22:15:16.998676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:16.998711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:17.009381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fb048 00:20:30.170 [2024-07-15 22:15:17.010109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:17.010143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:17.023186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e3060 00:20:30.170 [2024-07-15 22:15:17.024713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:17.024748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:17.034232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e5658 00:20:30.170 [2024-07-15 22:15:17.035625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:17.035660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:17.045995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f5378 00:20:30.170 [2024-07-15 22:15:17.047351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:17.047385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:17.057249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f1868 00:20:30.170 [2024-07-15 22:15:17.058343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:17.058378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:17.068985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f8618 00:20:30.170 [2024-07-15 22:15:17.070031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:17.070065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:17.081031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fcdd0 00:20:30.170 [2024-07-15 22:15:17.081603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:17.081636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:17.095985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e1f80 00:20:30.170 [2024-07-15 22:15:17.097896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:17.097930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:30.170 [2024-07-15 22:15:17.104546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f31b8 00:20:30.170 [2024-07-15 22:15:17.105474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.170 [2024-07-15 22:15:17.105505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.119075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e5a90 00:20:30.428 [2024-07-15 22:15:17.120723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.120759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.129876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f7100 00:20:30.428 [2024-07-15 22:15:17.130687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.130721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.142284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e2c28 00:20:30.428 [2024-07-15 22:15:17.143567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.143602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.155311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f0bc0 00:20:30.428 [2024-07-15 22:15:17.156695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.156736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.167216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f8a50 00:20:30.428 [2024-07-15 22:15:17.168378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.168413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.181673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e9e10 00:20:30.428 [2024-07-15 22:15:17.183460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.183493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.190328] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fd208 00:20:30.428 [2024-07-15 22:15:17.191115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.191147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.202774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fdeb0 00:20:30.428 [2024-07-15 22:15:17.203575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.203608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.214739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f5378 00:20:30.428 [2024-07-15 22:15:17.215533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.215567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.227574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f7da8 00:20:30.428 [2024-07-15 22:15:17.228548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.228580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.242337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e5ec8 00:20:30.428 [2024-07-15 22:15:17.243955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 
22:15:17.243989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.253837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e3060 00:20:30.428 [2024-07-15 22:15:17.255072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.255124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.266061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e8d30 00:20:30.428 [2024-07-15 22:15:17.267309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.267342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.277711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190ed0b0 00:20:30.428 [2024-07-15 22:15:17.279565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.279598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.291520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190eaab8 00:20:30.428 [2024-07-15 22:15:17.293019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.293053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.303201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f2510 00:20:30.428 [2024-07-15 22:15:17.304490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.304534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.314694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e27f0 00:20:30.428 [2024-07-15 22:15:17.315843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.428 [2024-07-15 22:15:17.315878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:30.428 [2024-07-15 22:15:17.329629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fa7d8 00:20:30.428 [2024-07-15 22:15:17.331618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:30.428 [2024-07-15 22:15:17.331660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:30.429 [2024-07-15 22:15:17.338829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190fef90 00:20:30.429 [2024-07-15 22:15:17.339852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.429 [2024-07-15 22:15:17.339890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:30.429 [2024-07-15 22:15:17.351298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f8a50 00:20:30.429 [2024-07-15 22:15:17.352305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.429 [2024-07-15 22:15:17.352343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:30.429 [2024-07-15 22:15:17.362911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190f8e88 00:20:30.429 [2024-07-15 22:15:17.363741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.429 [2024-07-15 22:15:17.363780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:30.686 [2024-07-15 22:15:17.377197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41880) with pdu=0x2000190e9e10 00:20:30.686 [2024-07-15 22:15:17.378239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.686 [2024-07-15 22:15:17.378279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:30.686 00:20:30.686 Latency(us) 00:20:30.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.686 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:30.686 nvme0n1 : 2.01 20682.75 80.79 0.00 0.00 6178.06 2487.39 16443.58 00:20:30.686 =================================================================================================================== 00:20:30.686 Total : 20682.75 80.79 0.00 0.00 6178.06 2487.39 16443.58 00:20:30.686 0 00:20:30.686 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:30.686 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:30.686 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:30.686 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:30.686 | .driver_specific 00:20:30.686 | .nvme_error 00:20:30.686 | .status_code 00:20:30.686 | .command_transient_transport_error' 00:20:30.943 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:20:30.943 22:15:17 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93600 00:20:30.944 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93600 ']' 00:20:30.944 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93600 00:20:30.944 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:30.944 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:30.944 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93600 00:20:30.944 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:30.944 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:30.944 killing process with pid 93600 00:20:30.944 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93600' 00:20:30.944 Received shutdown signal, test time was about 2.000000 seconds 00:20:30.944 00:20:30.944 Latency(us) 00:20:30.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.944 =================================================================================================================== 00:20:30.944 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:30.944 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93600 00:20:30.944 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93600 00:20:31.201 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:31.201 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:31.201 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:31.201 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:31.201 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:31.201 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93695 00:20:31.201 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:31.201 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93695 /var/tmp/bperf.sock 00:20:31.201 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93695 ']' 00:20:31.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:31.201 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:31.202 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.202 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
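The pass check a few records above reduces to a single RPC query: bdev_get_iostat is issued on the bperf socket for nvme0n1, and jq filters the per-controller NVMe error counters (the trace enables them with bdev_nvme_set_options --nvme-error-stat) down to command_transient_transport_error, which must be greater than zero. A minimal sketch of that query, reusing the socket path, bdev name, and jq filter exactly as they appear in this trace:

    # how many I/Os on nvme0n1 completed with TRANSIENT TRANSPORT ERROR
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # in this run: 162 > 0, so the randwrite 4096/qd128 case passes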
00:20:31.202 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.202 22:15:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:31.202 [2024-07-15 22:15:18.006160] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:20:31.202 [2024-07-15 22:15:18.020437] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93695 ] 00:20:31.202 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:31.202 Zero copy mechanism will not be used. 00:20:31.459 [2024-07-15 22:15:18.155550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.459 [2024-07-15 22:15:18.225179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.459 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.459 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:31.459 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:31.459 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:31.717 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:31.717 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.717 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:31.717 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.717 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:31.717 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:32.308 nvme0n1 00:20:32.308 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:32.308 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.308 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:32.308 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.308 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:32.308 22:15:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:32.308 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:32.308 Zero copy mechanism will not be used. 00:20:32.308 Running I/O for 2 seconds... 
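The setup traced above for the second error-injection case (randwrite, 128 KiB I/O, queue depth 16) follows the same pattern as the run that just finished: bdevperf is started in wait mode on /var/tmp/bperf.sock, bdev_nvme is told to keep NVMe error statistics and retry failed I/O indefinitely, any previous crc32c injection is cleared, the controller is attached over TCP with data digest enabled, corruption is re-armed on the accel side, and perform_tests starts the 2-second run. A condensed sketch of that sequence, using the binaries, socket, and flags as logged (the injection calls are assumed to go to the target's default RPC socket, which is what the rpc_cmd helper in the trace talks to):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # start bdevperf in wait mode (-z); it idles on the RPC socket until perform_tests is issued
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # keep per-controller NVMe error stats and retry failed I/O forever
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # clear any previous crc32c injection on the target
    $rpc accel_error_inject_error -o crc32c -t disable
    # attach the subsystem over TCP with data digest enabled (--ddgst)
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # re-arm corruption of crc32c operations (flags taken verbatim from the trace)
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # kick off the timed run
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests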
00:20:32.308 [2024-07-15 22:15:19.131103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.308 [2024-07-15 22:15:19.131511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.308 [2024-07-15 22:15:19.131555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.308 [2024-07-15 22:15:19.137162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.308 [2024-07-15 22:15:19.137559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.308 [2024-07-15 22:15:19.137616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.308 [2024-07-15 22:15:19.143651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.308 [2024-07-15 22:15:19.144033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.308 [2024-07-15 22:15:19.144122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.308 [2024-07-15 22:15:19.150357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.308 [2024-07-15 22:15:19.150769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.308 [2024-07-15 22:15:19.150832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.308 [2024-07-15 22:15:19.156816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.308 [2024-07-15 22:15:19.157236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.308 [2024-07-15 22:15:19.157301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.308 [2024-07-15 22:15:19.163476] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.308 [2024-07-15 22:15:19.163873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.308 [2024-07-15 22:15:19.163929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.308 [2024-07-15 22:15:19.170637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.308 [2024-07-15 22:15:19.171074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.308 [2024-07-15 22:15:19.171171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.308 [2024-07-15 22:15:19.177288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.308 [2024-07-15 22:15:19.177699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.308 [2024-07-15 22:15:19.177758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.308 [2024-07-15 22:15:19.183425] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.308 [2024-07-15 22:15:19.183823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.308 [2024-07-15 22:15:19.183876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.308 [2024-07-15 22:15:19.189022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.308 [2024-07-15 22:15:19.189415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.308 [2024-07-15 22:15:19.189469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.308 [2024-07-15 22:15:19.194864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.308 [2024-07-15 22:15:19.195293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.308 [2024-07-15 22:15:19.195358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.308 [2024-07-15 22:15:19.201223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.309 [2024-07-15 22:15:19.201614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.309 [2024-07-15 22:15:19.201681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.309 [2024-07-15 22:15:19.207073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.309 [2024-07-15 22:15:19.207441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.309 [2024-07-15 22:15:19.207492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.309 [2024-07-15 22:15:19.212569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.309 [2024-07-15 22:15:19.212868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.309 [2024-07-15 22:15:19.212926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.309 [2024-07-15 22:15:19.217974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.309 [2024-07-15 22:15:19.218294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.309 [2024-07-15 22:15:19.218344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.309 [2024-07-15 22:15:19.223808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.309 [2024-07-15 22:15:19.224138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.309 [2024-07-15 22:15:19.224190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.309 [2024-07-15 22:15:19.229175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.309 [2024-07-15 22:15:19.229470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.309 [2024-07-15 22:15:19.229521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.309 [2024-07-15 22:15:19.234766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.309 [2024-07-15 22:15:19.235068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.309 [2024-07-15 22:15:19.235144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.240266] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.240543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.240591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.245978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.246275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.246327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.251002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.251324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.251383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.256324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.256685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.256735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.261814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.262116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.262166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.269718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.270029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.270100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.279530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.279844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.279896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.285739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.285859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.285904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.290741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.290920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.290962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.296143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.296239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 
[2024-07-15 22:15:19.296293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.301289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.301468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.301507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.306416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.306530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.306573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.311597] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.311721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.311764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.317198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.317388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.317431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.323856] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.324038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.324108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.329726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.329862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.329906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.335019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.335169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.335208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.340686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.340820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.340857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.346312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.346433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.569 [2024-07-15 22:15:19.346464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.569 [2024-07-15 22:15:19.351563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.569 [2024-07-15 22:15:19.351683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.351715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.356851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.356972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.357008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.362529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.362689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.362732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.367620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.367740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.367771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.373158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.373321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.373356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.378577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.378955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.379007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.384989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.385134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.385172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.390335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.390474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.390514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.396146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.396265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.396326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.401546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.401677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.401716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.407054] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.407198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.407239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.412504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.412618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.412647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.417735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.417889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.417919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.423095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.423210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.423239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.428626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.428742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.428780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.433938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.434159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.434202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.439284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.439426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.439464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.445075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.445240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.445276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.451824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 
[2024-07-15 22:15:19.451991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.452027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.458454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.458580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.458611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.463741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.463889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.463926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.469282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.469426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.469457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.474701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.474810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.474841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.480306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.480418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.480452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.485514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.485619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.485655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.490556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.490677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.490707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.495993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.496137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.496181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.501460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.501565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.501594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.506972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.507126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.570 [2024-07-15 22:15:19.507162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.570 [2024-07-15 22:15:19.512431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.570 [2024-07-15 22:15:19.512571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.571 [2024-07-15 22:15:19.512615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.518907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.519061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.519119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.527512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.527688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.527726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.533656] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.533813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.533855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.540063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.540233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.540268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.545721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.545868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.545905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.551358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.551481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.551518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.557289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.557389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.557421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.562720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.562873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.562908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.568586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.568707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.568744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
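Each pair of records in this stretch is one injected failure: data_crc32_calc_done in tcp.c flags a data digest mismatch on a WRITE from the 128 KiB workload, the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and because bdev_nvme_set_options was given --bdev-retry-count -1 and --nvme-error-stat the I/O is retried while the status code is tallied into the counter that the digest.sh check reads after the run. For a rough offline tally of how many injections landed, the completions can simply be counted in a captured log (the file name here is illustrative, not from this run):

    grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bdevperf-run.log | wc -l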
00:20:32.831 [2024-07-15 22:15:19.573873] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.574002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.574033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.580471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.580589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.580629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.585963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.586113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.586153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.591532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.591676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.591713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.597199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.597313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.597348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.602880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.603008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.603041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.608802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.608943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.608981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.614401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.614514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.614546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.620893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.621103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.621145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.626813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.626942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.626972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.632136] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.632240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.632268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.637787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.637909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.637939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.643886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.644038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.644066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.650014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.650162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.650198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.656364] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.656515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.656552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.661731] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.661880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.661912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.667501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.667603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.667633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.831 [2024-07-15 22:15:19.672499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.831 [2024-07-15 22:15:19.672605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.831 [2024-07-15 22:15:19.672632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.678367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.678491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.678536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.683766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.683897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.683941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.689513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.689628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.689662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.695620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.695737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.695768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.700970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.701068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.701114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.706354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.706476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.706515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.711914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.712050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.712101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.717472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.717590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.717626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.722740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.722834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.722862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.728240] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.728376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 
[2024-07-15 22:15:19.728407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.733769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.733890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.733921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.741616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.741781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.741819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.749885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.750017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.750056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.755187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.755320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.755364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.760462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.760578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.760617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.765694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.765810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.765842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:32.832 [2024-07-15 22:15:19.771566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:32.832 [2024-07-15 22:15:19.771725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.832 [2024-07-15 22:15:19.771763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.090 [2024-07-15 22:15:19.778611] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.090 [2024-07-15 22:15:19.778781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.090 [2024-07-15 22:15:19.778813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.090 [2024-07-15 22:15:19.785280] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.090 [2024-07-15 22:15:19.785434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.090 [2024-07-15 22:15:19.785467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.090 [2024-07-15 22:15:19.790601] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.090 [2024-07-15 22:15:19.790718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.090 [2024-07-15 22:15:19.790750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.090 [2024-07-15 22:15:19.796035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.090 [2024-07-15 22:15:19.796167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.090 [2024-07-15 22:15:19.796208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.090 [2024-07-15 22:15:19.801626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.090 [2024-07-15 22:15:19.801740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.090 [2024-07-15 22:15:19.801784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.090 [2024-07-15 22:15:19.807269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.090 [2024-07-15 22:15:19.807399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.090 [2024-07-15 22:15:19.807435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.090 [2024-07-15 22:15:19.812988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.090 [2024-07-15 22:15:19.813132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.090 [2024-07-15 22:15:19.813167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.818159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.818273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.818308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.823472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.823568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.823598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.828943] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.829071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.829123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.835925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.836107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.836144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.842377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.842545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.842584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.848818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.848975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.849009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.855255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.855397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.855433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.861742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.861867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.861903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.868178] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.868332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.868372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.874753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.874912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.874943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.881432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.881579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.881620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.887952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.888098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.888130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.893153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.893268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.893300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.898556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 
[2024-07-15 22:15:19.898717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.898754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.903997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.904155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.904191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.909782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.909886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.909914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.915188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.915316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.915346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.920892] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.921033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.921062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.926411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.926560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.926597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.931883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.932032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.932064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.937554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.937665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.937697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.943256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.943406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.943443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.948416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.948518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.948552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.954012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.954207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.954245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.959449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.959573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.959608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.964937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.965053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.965096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.970246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.970368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.970405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.975841] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.975969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.976003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.981217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.981340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.981375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.986854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.986967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.987003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.992854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.992995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.993025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:19.999519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:19.999684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:19.999724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:20.010219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:20.010370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:20.010403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:20.016884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:20.017007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:20.017044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
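(Aside, not part of the captured console output: each repeated pair of messages in this stretch of the log records the same event. data_crc32_calc_done in tcp.c reports that the CRC32C data digest computed over a received NVMe/TCP data PDU does not match the digest carried with it, and the affected 32-block WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). As a rough, self-contained sketch only, assuming nothing about SPDK internals beyond what the log shows, and with helper names such as data_digest_ok invented for illustration, the check amounts to recomputing CRC32C over the payload and comparing it with the received digest:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli polynomial, reflected), the digest
 * NVMe/TCP carries with the data portion of a PDU. */
static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++) {
			crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Receive-side check (illustrative name, not an SPDK function):
 * recompute the digest over the received data and compare it with
 * the digest carried in the PDU. */
static int data_digest_ok(const void *data, size_t len, uint32_t received_ddgst)
{
	return crc32c(data, len) == received_ddgst;
}

int main(void)
{
	uint8_t payload[32];	/* len:32, like the WRITEs in this log */
	uint32_t ddgst, corrupted;

	memset(payload, 0xA5, sizeof(payload));
	ddgst = crc32c(payload, sizeof(payload));
	corrupted = ddgst ^ 1;	/* flip one bit to mimic a digest mismatch */

	printf("intact digest:    %s\n",
	       data_digest_ok(payload, sizeof(payload), ddgst) ? "ok" : "data digest error");
	printf("mismatched digest: %s\n",
	       data_digest_ok(payload, sizeof(payload), corrupted) ? "ok" : "data digest error");
	return 0;
}

Compiled and run, the sketch prints "ok" for the intact digest and "data digest error" for the mismatched one, mirroring the error path the log keeps reporting; the transient (00/22) status indicates only the transfer, not the medium or the command itself, failed.)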
00:20:33.091 [2024-07-15 22:15:20.022472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:20.022582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:20.022619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:20.027954] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:20.028100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:20.028130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:20.033624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.091 [2024-07-15 22:15:20.033787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.091 [2024-07-15 22:15:20.033820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.091 [2024-07-15 22:15:20.039288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.039419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.039449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.044715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.044846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.044881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.049927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.050038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.050066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.056056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.056187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.056220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.061382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.061607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.061646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.066708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.067334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.067391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.073254] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.073413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.073448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.078745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.078891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.078934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.084329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.084432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.084465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.090183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.090361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.090393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.096443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.096613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.096649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.101582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.101733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.101766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.107215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.107350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.107381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.112646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.112804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.112839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.118180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.118317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.118357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.124607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.124741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.124780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.130148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.130293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.130329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.135501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.135618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.135651] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.140832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.140963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.141002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.146614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.146748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.146778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.151845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.151945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.151970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.157514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.157641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.157681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.162649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.162794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.162824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.167858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.167991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.168020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.173529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.350 [2024-07-15 22:15:20.173660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.350 [2024-07-15 22:15:20.173697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.350 [2024-07-15 22:15:20.178935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.179050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.179100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.184633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.184754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.184793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.189997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.190138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.190171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.195436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.195552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.195586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.201512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.201695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.201738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.207163] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.207296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.207332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.212237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.212361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 
22:15:20.212391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.217643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.217802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.217838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.223268] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.223441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.223476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.229414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.229558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.229589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.234886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.234999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.235036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.240382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.240540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.240572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.245882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.246008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.246039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.251401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.251500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:33.351 [2024-07-15 22:15:20.251529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.256868] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.256971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.257001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.262404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.262540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.262577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.267913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.268041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.268071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.273275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.273399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.273434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.278830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.278964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.278993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.284531] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.284655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.284694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.289906] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.290237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.290285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.351 [2024-07-15 22:15:20.295966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.351 [2024-07-15 22:15:20.296169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.351 [2024-07-15 22:15:20.296209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.610 [2024-07-15 22:15:20.302494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.610 [2024-07-15 22:15:20.302650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.610 [2024-07-15 22:15:20.302685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.610 [2024-07-15 22:15:20.307743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.610 [2024-07-15 22:15:20.307835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.307860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.313334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.313451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.313485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.319726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.319865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.319895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.326554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.326738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.326775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.331684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.331813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.331846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.338043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.338220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.338262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.345206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.345408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.345452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.350607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.350779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.350816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.356363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.356471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.356498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.361625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.361802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.361832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.367014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.367139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.367170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.372484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.372635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.372668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.377751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.377902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.377936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.383251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.383380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.383412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.388857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.388996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.389033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.394529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.394662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.394701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.399698] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.399836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.399878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.405837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.406015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.406059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.411885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.412021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.412062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.417952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.418077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.418126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.423349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.423493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.423540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.429313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.429449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.429488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.434930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.435048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.435094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.440468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.440607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.440650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.449415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.449543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.449582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.454994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 
22:15:20.455119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.455152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.460679] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.460819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.460853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.467220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.467335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.467373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.472467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.611 [2024-07-15 22:15:20.472596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.611 [2024-07-15 22:15:20.472639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.611 [2024-07-15 22:15:20.477698] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.477839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.477879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.483297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.483427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.483468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.489441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.489865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.489921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.495528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with 
pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.495689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.495719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.502442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.502610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.502639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.508021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.508290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.508326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.514407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.514645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.514685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.520141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.520324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.520360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.524978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.525135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.525158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.530565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.530744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.530777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.537507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.537703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.537740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.543106] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.543276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.543307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.548785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.549046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.549112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.612 [2024-07-15 22:15:20.554348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.612 [2024-07-15 22:15:20.554584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.612 [2024-07-15 22:15:20.554623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.559966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.560174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.560205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.565522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.565681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.565711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.571063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.571239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.571273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.576616] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.576850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.576889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.581726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.581862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.581892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.586804] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.586946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.586979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.592249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.592402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.592429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.597235] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.597360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.597384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.602061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.602201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.602225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.606886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.607025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.607048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.873 
[2024-07-15 22:15:20.611766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.611914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.611939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.616641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.616797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.616821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.621918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.622074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.622120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.629234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.629426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.629455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.635515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.635677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.635705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.642459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.642652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.642682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.649426] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.649603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.649633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.654499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.654624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.654648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.659516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.659661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.659685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.664449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.664584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.664608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.669591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.669750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.669782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.674407] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.674550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.674574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.679212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.679378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.679408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.684253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.684404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.684434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.689148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.873 [2024-07-15 22:15:20.689287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.873 [2024-07-15 22:15:20.689318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.873 [2024-07-15 22:15:20.693936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.694109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.694141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.698795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.698958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.698984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.703999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.704162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.704192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.708880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.709028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.709058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.713767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.713919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.713946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.718789] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.718917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.718949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.723534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.723658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.723683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.728504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.728628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.728652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.733356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.733504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.733528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.739033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.739194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.739219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.744750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.744876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.744900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.749553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.749690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.749713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.754421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.754559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.754583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.759197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.759322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.759346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.763921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.764045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.764069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.768755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.768933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.768970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.773874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.774001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.774032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.778575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.778718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.778748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.783323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.783448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.783479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.788220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.788362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 
22:15:20.788393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.793060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.793222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.793263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.797857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.798002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.798033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.802676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.802801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.802832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.807923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.808051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.808077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.812753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.812877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.812907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.874 [2024-07-15 22:15:20.817764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:33.874 [2024-07-15 22:15:20.817914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.874 [2024-07-15 22:15:20.817945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.822529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.822670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:34.134 [2024-07-15 22:15:20.822700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.827302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.827427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.827455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.832101] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.832227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.832254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.837027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.837189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.837217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.841968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.842152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.842182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.847179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.847305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.847328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.852027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.852166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.852189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.857180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.857355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.857382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.862502] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.862630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.862668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.867343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.867479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.867508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.872286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.872419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.872449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.877620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.877765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.877798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.882578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.882716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.882746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.887358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.887505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.887548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.892301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.892475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.892520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.897188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.897361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.897406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.902107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.902287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.902323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.906857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.907013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.907050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.912169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.912367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.912402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.917596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.917759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.917789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.922889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.923040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.923071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.927695] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.927854] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.927884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.932570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.932733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.932762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.937446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.937596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.134 [2024-07-15 22:15:20.937626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.134 [2024-07-15 22:15:20.942454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.134 [2024-07-15 22:15:20.942698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:20.942731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:20.947953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:20.948111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:20.948137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:20.953214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:20.953363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:20.953387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:20.957992] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:20.958146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:20.958170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:20.962722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:20.962857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:20.962879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:20.967535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:20.967659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:20.967681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:20.972291] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:20.972428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:20.972450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:20.977262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:20.977402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:20.977426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:20.982006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:20.982143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:20.982167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:20.987121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:20.987262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:20.987286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:20.991945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:20.992100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:20.992124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:20.996782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 
22:15:20.996920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:20.996942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.001697] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.001840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.001861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.006532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.006655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.006680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.011273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.011393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.011415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.015990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.016137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.016160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.020797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.020936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.020959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.025995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.026157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.026182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.030769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with 
pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.030891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.030914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.035579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.035702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.035725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.040395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.040517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.040538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.045218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.045339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.045362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.049973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.050127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.050149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.054760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.054896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.054920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.059772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.059912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.059941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.064917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.065145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.065170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.070503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.070750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.070791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.075558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.075818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.075856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.135 [2024-07-15 22:15:21.081120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.135 [2024-07-15 22:15:21.081384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.135 [2024-07-15 22:15:21.081526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.394 [2024-07-15 22:15:21.086209] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.394 [2024-07-15 22:15:21.086357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.394 [2024-07-15 22:15:21.086380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.394 [2024-07-15 22:15:21.091162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.394 [2024-07-15 22:15:21.091308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.394 [2024-07-15 22:15:21.091331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.394 [2024-07-15 22:15:21.096464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.394 [2024-07-15 22:15:21.096661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.394 [2024-07-15 22:15:21.096693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.394 [2024-07-15 22:15:21.102150] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.394 [2024-07-15 22:15:21.102329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.394 [2024-07-15 22:15:21.102364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.394 [2024-07-15 22:15:21.106955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.394 [2024-07-15 22:15:21.107133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.394 [2024-07-15 22:15:21.107166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:34.394 [2024-07-15 22:15:21.111755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.394 [2024-07-15 22:15:21.111898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.394 [2024-07-15 22:15:21.111932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:34.394 [2024-07-15 22:15:21.116572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.394 [2024-07-15 22:15:21.116714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.394 [2024-07-15 22:15:21.116755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.394 [2024-07-15 22:15:21.121323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f41bc0) with pdu=0x2000190fef90 00:20:34.394 [2024-07-15 22:15:21.121475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.394 [2024-07-15 22:15:21.121509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:34.394 00:20:34.394 Latency(us) 00:20:34.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.394 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:34.394 nvme0n1 : 2.00 5554.52 694.31 0.00 0.00 2873.59 2129.92 10843.23 00:20:34.394 =================================================================================================================== 00:20:34.394 Total : 5554.52 694.31 0.00 0.00 2873.59 2129.92 10843.23 00:20:34.394 0 00:20:34.395 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:34.395 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:34.395 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:34.395 | .driver_specific 00:20:34.395 | .nvme_error 00:20:34.395 | .status_code 00:20:34.395 | .command_transient_transport_error' 00:20:34.395 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:34.652 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 358 > 0 )) 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93695 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93695 ']' 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93695 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93695 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:34.653 killing process with pid 93695 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93695' 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93695 00:20:34.653 Received shutdown signal, test time was about 2.000000 seconds 00:20:34.653 00:20:34.653 Latency(us) 00:20:34.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.653 =================================================================================================================== 00:20:34.653 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93695 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93408 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93408 ']' 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93408 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:34.653 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93408 00:20:34.910 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:34.910 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:34.910 killing process with pid 93408 00:20:34.910 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93408' 00:20:34.910 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93408 00:20:34.910 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93408 00:20:34.910 00:20:34.910 real 0m16.893s 00:20:34.910 user 0m32.392s 00:20:34.910 sys 0m4.425s 00:20:34.910 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:34.910 ************************************ 00:20:34.910 END TEST 
nvmf_digest_error 00:20:34.910 ************************************ 00:20:34.910 22:15:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:34.910 22:15:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:34.910 22:15:21 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:34.910 22:15:21 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:34.910 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:34.910 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:35.169 rmmod nvme_tcp 00:20:35.169 rmmod nvme_fabrics 00:20:35.169 rmmod nvme_keyring 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93408 ']' 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93408 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93408 ']' 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93408 00:20:35.169 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93408) - No such process 00:20:35.169 Process with pid 93408 is not found 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93408 is not found' 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:35.169 00:20:35.169 real 0m35.130s 00:20:35.169 user 1m6.464s 00:20:35.169 sys 0m9.126s 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:35.169 ************************************ 00:20:35.169 END TEST nvmf_digest 00:20:35.169 ************************************ 00:20:35.169 22:15:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:35.169 22:15:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:35.169 22:15:21 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:20:35.169 22:15:21 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:20:35.169 22:15:21 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test 
nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:35.169 22:15:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:35.169 22:15:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:35.169 22:15:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:35.169 ************************************ 00:20:35.169 START TEST nvmf_mdns_discovery 00:20:35.169 ************************************ 00:20:35.169 22:15:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:35.169 * Looking for test storage... 00:20:35.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:35.169 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:20:35.170 22:15:22 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:35.170 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 
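The nvmf_veth_init steps traced around this point set up the virtual test network: a target namespace nvmf_tgt_ns_spdk holding two veth endpoints (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3), an initiator-side endpoint nvmf_init_if at 10.0.0.1, and a bridge nvmf_br joining the host-side peers. A minimal standalone sketch of the same topology, assuming root privileges and using only the interface names and addresses visible in the trace (the actual common.sh helper additionally tears down any leftover interfaces first and handles error paths):
  # namespace and the three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target-side endpoints into the namespace and assign addresses
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up, inside and outside the namespace
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP traffic on the initiator interface and forward across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
The pings to 10.0.0.2, 10.0.0.3 and, from inside the namespace, 10.0.0.1 later in the trace are the sanity check that this topology is actually passing traffic before the target is started.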
00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:35.428 Cannot find device "nvmf_tgt_br" 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:35.428 Cannot find device "nvmf_tgt_br2" 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:35.428 Cannot find device "nvmf_tgt_br" 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:35.428 Cannot find device "nvmf_tgt_br2" 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:35.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:35.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:35.428 22:15:22 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:35.428 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:35.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:20:35.686 00:20:35.686 --- 10.0.0.2 ping statistics --- 00:20:35.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.686 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:35.686 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:35.686 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:35.686 00:20:35.686 --- 10.0.0.3 ping statistics --- 00:20:35.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.686 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:35.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:35.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:20:35.686 00:20:35.686 --- 10.0.0.1 ping statistics --- 00:20:35.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.686 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=93967 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 93967 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 93967 ']' 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.686 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.686 [2024-07-15 22:15:22.519609] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:20:35.686 [2024-07-15 22:15:22.519701] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.943 [2024-07-15 22:15:22.659802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.943 [2024-07-15 22:15:22.745052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:35.943 [2024-07-15 22:15:22.745131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.943 [2024-07-15 22:15:22.745149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.943 [2024-07-15 22:15:22.745161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.943 [2024-07-15 22:15:22.745172] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.943 [2024-07-15 22:15:22.745210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.943 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.201 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.201 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.201 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 [2024-07-15 22:15:22.915361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 [2024-07-15 22:15:22.923467] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 null0 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 null1 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 null2 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 null3 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.202 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94008 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94008 /tmp/host.sock 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94008 ']' 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.202 22:15:22 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:36.202 [2024-07-15 22:15:23.047630] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:20:36.202 [2024-07-15 22:15:23.047756] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94008 ] 00:20:36.459 [2024-07-15 22:15:23.188449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.459 [2024-07-15 22:15:23.247615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.393 22:15:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.393 22:15:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:20:37.393 22:15:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:20:37.393 22:15:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:20:37.393 22:15:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:20:37.393 22:15:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94035 00:20:37.393 22:15:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:20:37.393 22:15:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:20:37.393 22:15:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:20:37.393 Process 987 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:20:37.393 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:20:37.393 Successfully dropped root privileges. 00:20:37.393 avahi-daemon 0.8 starting up. 00:20:37.393 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:20:37.393 Successfully called chroot(). 00:20:37.393 Successfully dropped remaining capabilities. 00:20:37.393 No service file found in /etc/avahi/services. 00:20:38.333 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:38.333 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:20:38.333 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:38.333 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:20:38.333 Network interface enumeration completed. 00:20:38.333 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:20:38.333 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:20:38.333 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:20:38.333 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:20:38.333 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 1886979121. 
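The avahi-daemon instance whose startup is logged above runs inside the target namespace with the configuration generated by the echo traced at host/mdns_discovery.sh@57. Rendered as a config file, that is simply:
  [server]
  allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
  use-ipv4=yes
  use-ipv6=no
which restricts the mDNS responder to the two target-side interfaces (10.0.0.2 and 10.0.0.3) over IPv4 only. With the responder up, the host-side nvmf_tgt listening on /tmp/host.sock is told to browse for the _nvme-disc._tcp service type in the trace lines that follow; rpc_cmd is a thin wrapper, so spelled out against scripts/rpc.py the call amounts to roughly:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
      -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
that is, name the discovery context mdns (-b, which is why the attached controllers later show up as mdns0_nvme0 and mdns1_nvme0), browse for _nvme-disc._tcp services (-s), and connect to whatever is found using the host NQN nqn.2021-12.io.spdk:test (-q). The later nvmf_publish_mdns_prr call on the target side is what advertises the discovery listeners (port 8009 on 10.0.0.2 and 10.0.0.3) through avahi, which then appear as the spdk0/spdk1 services resolved in the mdns_resolve_handler lines below.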
00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:38.333 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # sort 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 [2024-07-15 22:15:25.445583] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.591 [2024-07-15 22:15:25.484408] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.592 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.592 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:38.592 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.592 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.592 [2024-07-15 22:15:25.524372] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:38.592 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.592 22:15:25 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:38.592 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.592 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.592 [2024-07-15 22:15:25.532378] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:38.592 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.592 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:20:38.592 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.592 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.849 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.849 22:15:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:20:39.413 [2024-07-15 22:15:26.345578] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:40.348 [2024-07-15 22:15:26.945618] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:40.348 [2024-07-15 22:15:26.945670] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:40.348 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:40.348 cookie is 0 00:20:40.348 is_local: 1 00:20:40.348 our_own: 0 00:20:40.348 wide_area: 0 00:20:40.348 multicast: 1 00:20:40.348 cached: 1 00:20:40.348 [2024-07-15 22:15:27.045601] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:40.348 [2024-07-15 22:15:27.045651] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:40.348 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:40.348 cookie is 0 00:20:40.348 is_local: 1 00:20:40.348 our_own: 0 00:20:40.348 wide_area: 0 00:20:40.348 multicast: 1 00:20:40.348 cached: 1 00:20:40.348 [2024-07-15 22:15:27.045666] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:40.348 [2024-07-15 22:15:27.145601] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:40.348 [2024-07-15 22:15:27.145651] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:40.348 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:40.348 cookie is 0 00:20:40.348 is_local: 1 00:20:40.348 our_own: 0 00:20:40.348 wide_area: 0 00:20:40.348 multicast: 1 00:20:40.348 cached: 1 00:20:40.348 [2024-07-15 22:15:27.245601] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:40.348 [2024-07-15 22:15:27.245651] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:40.348 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:40.348 cookie is 0 00:20:40.348 is_local: 1 00:20:40.348 our_own: 0 00:20:40.348 wide_area: 0 00:20:40.348 multicast: 1 00:20:40.348 cached: 1 00:20:40.348 [2024-07-15 22:15:27.245668] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:41.282 [2024-07-15 22:15:27.950269] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:41.282 [2024-07-15 22:15:27.950314] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:41.282 [2024-07-15 22:15:27.950335] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:41.282 [2024-07-15 22:15:28.036435] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:20:41.282 [2024-07-15 22:15:28.093526] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:41.282 [2024-07-15 22:15:28.093576] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:41.282 [2024-07-15 22:15:28.150155] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:41.282 [2024-07-15 22:15:28.150203] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:41.282 [2024-07-15 22:15:28.150226] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:41.540 [2024-07-15 22:15:28.237339] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:20:41.540 [2024-07-15 22:15:28.301598] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:41.540 [2024-07-15 22:15:28.301649] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:44.215 22:15:30 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:44.215 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:20:44.216 
22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.216 22:15:30 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:20:45.151 22:15:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:20:45.151 22:15:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:45.151 22:15:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:45.151 22:15:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.151 22:15:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.151 22:15:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:45.151 22:15:31 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:45.151 22:15:31 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.151 [2024-07-15 22:15:32.067376] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:45.151 [2024-07-15 22:15:32.068544] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:45.151 [2024-07-15 22:15:32.068599] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:45.151 [2024-07-15 22:15:32.068639] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:45.151 [2024-07-15 22:15:32.068654] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.151 [2024-07-15 22:15:32.075327] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:45.151 [2024-07-15 22:15:32.075603] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:45.151 [2024-07-15 22:15:32.075726] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.151 22:15:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:20:45.409 [2024-07-15 22:15:32.205670] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:20:45.409 [2024-07-15 22:15:32.205948] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:20:45.409 [2024-07-15 22:15:32.264037] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:45.409 [2024-07-15 22:15:32.264110] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:45.409 [2024-07-15 22:15:32.264120] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:45.409 [2024-07-15 22:15:32.264152] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:45.409 [2024-07-15 22:15:32.264201] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:45.409 [2024-07-15 22:15:32.264211] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:45.409 [2024-07-15 22:15:32.264217] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:45.409 [2024-07-15 22:15:32.264234] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:45.409 [2024-07-15 22:15:32.309829] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:45.409 [2024-07-15 22:15:32.309886] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:45.409 [2024-07-15 22:15:32.309958] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:45.409 [2024-07-15 22:15:32.309972] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:46.343 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.603 [2024-07-15 22:15:33.400305] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:46.603 [2024-07-15 22:15:33.400349] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:46.603 [2024-07-15 22:15:33.400387] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:46.603 [2024-07-15 22:15:33.400402] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:46.603 [2024-07-15 22:15:33.402333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.603 [2024-07-15 22:15:33.402371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.603 [2024-07-15 22:15:33.402385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.603 [2024-07-15 22:15:33.402395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.603 [2024-07-15 22:15:33.402405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.603 [2024-07-15 22:15:33.402415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.603 [2024-07-15 22:15:33.402425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.603 [2024-07-15 22:15:33.402434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.603 [2024-07-15 22:15:33.402443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:20:46.603 [2024-07-15 22:15:33.408293] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:46.603 [2024-07-15 22:15:33.408354] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:46.603 [2024-07-15 22:15:33.412310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.603 22:15:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:20:46.603 [2024-07-15 22:15:33.415303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.603 [2024-07-15 22:15:33.415333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.603 [2024-07-15 22:15:33.415346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.603 [2024-07-15 22:15:33.415355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.603 [2024-07-15 22:15:33.415366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.603 [2024-07-15 22:15:33.415375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.603 [2024-07-15 22:15:33.415385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.603 [2024-07-15 22:15:33.415394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.603 [2024-07-15 22:15:33.415404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7744b0 is same with the state(5) to be set 00:20:46.603 [2024-07-15 22:15:33.422330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:46.603 [2024-07-15 22:15:33.422449] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.603 [2024-07-15 22:15:33.422473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x776ab0 with addr=10.0.0.2, port=4420 00:20:46.604 [2024-07-15 22:15:33.422485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.604 [2024-07-15 22:15:33.422503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.604 [2024-07-15 22:15:33.422519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:46.604 [2024-07-15 22:15:33.422528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:46.604 [2024-07-15 22:15:33.422540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:46.604 [2024-07-15 22:15:33.422556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
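For reference, the target-side configuration exercised in the xtrace above (mdns_discovery.sh@99 through @124) roughly boils down to the RPC sequence below. This is a reconstruction from the trace, assuming rpc_cmd resolves to SPDK's scripts/rpc.py against the target's default RPC socket as in the autotest harness; the NQNs, bdev names and addresses are copied verbatim from the log, and the later steps (@137/@138 extra namespaces, @147/@148 extra 4421 listeners) build on top of it.

  # Sketch reconstructed from the xtrace above: two subsystems, each with a TCP
  # data listener, plus a discovery listener and an mDNS pull registration request.
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
  rpc_cmd nvmf_publish_mdns_prr
  # After this the mdns browse/resolve and discovery_attach_cb lines above show the
  # host attaching mdns0_nvme0 (10.0.0.3:4420) and mdns1_nvme0 (10.0.0.2:4420).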
00:20:46.604 [2024-07-15 22:15:33.425271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7744b0 (9): Bad file descriptor 00:20:46.604 [2024-07-15 22:15:33.432390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:46.604 [2024-07-15 22:15:33.432476] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.604 [2024-07-15 22:15:33.432497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x776ab0 with addr=10.0.0.2, port=4420 00:20:46.604 [2024-07-15 22:15:33.432508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.604 [2024-07-15 22:15:33.432524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.604 [2024-07-15 22:15:33.432539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:46.604 [2024-07-15 22:15:33.432549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:46.604 [2024-07-15 22:15:33.432558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:46.604 [2024-07-15 22:15:33.432573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:46.604 [2024-07-15 22:15:33.435296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:46.604 [2024-07-15 22:15:33.435373] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.604 [2024-07-15 22:15:33.435393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7744b0 with addr=10.0.0.3, port=4420 00:20:46.604 [2024-07-15 22:15:33.435404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7744b0 is same with the state(5) to be set 00:20:46.604 [2024-07-15 22:15:33.435419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7744b0 (9): Bad file descriptor 00:20:46.604 [2024-07-15 22:15:33.435434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:46.604 [2024-07-15 22:15:33.435442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:46.604 [2024-07-15 22:15:33.435452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:46.604 [2024-07-15 22:15:33.435467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:46.604 [2024-07-15 22:15:33.442439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:46.604 [2024-07-15 22:15:33.442517] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.604 [2024-07-15 22:15:33.442537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x776ab0 with addr=10.0.0.2, port=4420 00:20:46.604 [2024-07-15 22:15:33.442548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.604 [2024-07-15 22:15:33.442564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.604 [2024-07-15 22:15:33.442578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:46.604 [2024-07-15 22:15:33.442587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:46.604 [2024-07-15 22:15:33.442597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:46.604 [2024-07-15 22:15:33.442611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:46.604 [2024-07-15 22:15:33.445342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:46.604 [2024-07-15 22:15:33.445419] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.604 [2024-07-15 22:15:33.445440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7744b0 with addr=10.0.0.3, port=4420 00:20:46.604 [2024-07-15 22:15:33.445450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7744b0 is same with the state(5) to be set 00:20:46.604 [2024-07-15 22:15:33.445466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7744b0 (9): Bad file descriptor 00:20:46.604 [2024-07-15 22:15:33.445480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:46.604 [2024-07-15 22:15:33.445489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:46.604 [2024-07-15 22:15:33.445498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:46.604 [2024-07-15 22:15:33.445513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:46.604 [2024-07-15 22:15:33.452487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:46.604 [2024-07-15 22:15:33.452566] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.604 [2024-07-15 22:15:33.452586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x776ab0 with addr=10.0.0.2, port=4420 00:20:46.604 [2024-07-15 22:15:33.452597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.604 [2024-07-15 22:15:33.452613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.604 [2024-07-15 22:15:33.452627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:46.604 [2024-07-15 22:15:33.452637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:46.604 [2024-07-15 22:15:33.452646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:46.604 [2024-07-15 22:15:33.452660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:46.604 [2024-07-15 22:15:33.455392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:46.604 [2024-07-15 22:15:33.455479] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.604 [2024-07-15 22:15:33.455501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7744b0 with addr=10.0.0.3, port=4420 00:20:46.604 [2024-07-15 22:15:33.455512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7744b0 is same with the state(5) to be set 00:20:46.604 [2024-07-15 22:15:33.455528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7744b0 (9): Bad file descriptor 00:20:46.604 [2024-07-15 22:15:33.455551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:46.604 [2024-07-15 22:15:33.455561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:46.604 [2024-07-15 22:15:33.455571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:46.604 [2024-07-15 22:15:33.455586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
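The repeated rpc_cmd | jq | sort | xargs fragments in the checks above come from small helpers in host/mdns_discovery.sh (the @65/@69/@73 markers). A minimal sketch of what they evaluate to, using the /tmp/host.sock RPC socket of the host-side app exactly as shown in the trace, looks like this:

  # Sketch of the verification helpers whose xtrace appears above; /tmp/host.sock is
  # the host (initiator) SPDK app's RPC socket in this run.
  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
          jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  # e.g. the @130 check above expects: [[ $(get_bdev_list) == "mdns0_nvme0n1 mdns1_nvme0n1" ]]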
00:20:46.604 [2024-07-15 22:15:33.462536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:46.604 [2024-07-15 22:15:33.462613] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.604 [2024-07-15 22:15:33.462634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x776ab0 with addr=10.0.0.2, port=4420 00:20:46.604 [2024-07-15 22:15:33.462645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.604 [2024-07-15 22:15:33.462661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.604 [2024-07-15 22:15:33.462675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:46.604 [2024-07-15 22:15:33.462684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:46.604 [2024-07-15 22:15:33.462693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:46.604 [2024-07-15 22:15:33.462708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:46.604 [2024-07-15 22:15:33.465444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:46.604 [2024-07-15 22:15:33.465520] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.604 [2024-07-15 22:15:33.465541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7744b0 with addr=10.0.0.3, port=4420 00:20:46.604 [2024-07-15 22:15:33.465551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7744b0 is same with the state(5) to be set 00:20:46.604 [2024-07-15 22:15:33.465566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7744b0 (9): Bad file descriptor 00:20:46.604 [2024-07-15 22:15:33.465581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:46.604 [2024-07-15 22:15:33.465590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:46.604 [2024-07-15 22:15:33.465599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:46.604 [2024-07-15 22:15:33.465613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:46.604 [2024-07-15 22:15:33.472582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:46.604 [2024-07-15 22:15:33.472657] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.604 [2024-07-15 22:15:33.472677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x776ab0 with addr=10.0.0.2, port=4420 00:20:46.604 [2024-07-15 22:15:33.472687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.604 [2024-07-15 22:15:33.472703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.604 [2024-07-15 22:15:33.472717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:46.604 [2024-07-15 22:15:33.472726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:46.604 [2024-07-15 22:15:33.472736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:46.604 [2024-07-15 22:15:33.472750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:46.604 [2024-07-15 22:15:33.475490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:46.604 [2024-07-15 22:15:33.475565] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.604 [2024-07-15 22:15:33.475589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7744b0 with addr=10.0.0.3, port=4420 00:20:46.604 [2024-07-15 22:15:33.475600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7744b0 is same with the state(5) to be set 00:20:46.604 [2024-07-15 22:15:33.475616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7744b0 (9): Bad file descriptor 00:20:46.604 [2024-07-15 22:15:33.475638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:46.604 [2024-07-15 22:15:33.475648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:46.604 [2024-07-15 22:15:33.475657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:46.604 [2024-07-15 22:15:33.475672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:46.604 [2024-07-15 22:15:33.482628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:46.605 [2024-07-15 22:15:33.482706] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.605 [2024-07-15 22:15:33.482726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x776ab0 with addr=10.0.0.2, port=4420 00:20:46.605 [2024-07-15 22:15:33.482737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.605 [2024-07-15 22:15:33.482753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.605 [2024-07-15 22:15:33.482767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:46.605 [2024-07-15 22:15:33.482776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:46.605 [2024-07-15 22:15:33.482785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:46.605 [2024-07-15 22:15:33.482800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:46.605 [2024-07-15 22:15:33.485537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:46.605 [2024-07-15 22:15:33.485612] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.605 [2024-07-15 22:15:33.485632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7744b0 with addr=10.0.0.3, port=4420 00:20:46.605 [2024-07-15 22:15:33.485643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7744b0 is same with the state(5) to be set 00:20:46.605 [2024-07-15 22:15:33.485658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7744b0 (9): Bad file descriptor 00:20:46.605 [2024-07-15 22:15:33.485672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:46.605 [2024-07-15 22:15:33.485681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:46.605 [2024-07-15 22:15:33.485690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:46.605 [2024-07-15 22:15:33.485705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:46.605 [2024-07-15 22:15:33.492677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:46.605 [2024-07-15 22:15:33.492752] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.605 [2024-07-15 22:15:33.492772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x776ab0 with addr=10.0.0.2, port=4420 00:20:46.605 [2024-07-15 22:15:33.492783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.605 [2024-07-15 22:15:33.492798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.605 [2024-07-15 22:15:33.492812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:46.605 [2024-07-15 22:15:33.492822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:46.605 [2024-07-15 22:15:33.492831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:46.605 [2024-07-15 22:15:33.492845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:46.605 [2024-07-15 22:15:33.495584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:46.605 [2024-07-15 22:15:33.495669] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.605 [2024-07-15 22:15:33.495689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7744b0 with addr=10.0.0.3, port=4420 00:20:46.605 [2024-07-15 22:15:33.495700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7744b0 is same with the state(5) to be set 00:20:46.605 [2024-07-15 22:15:33.495735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7744b0 (9): Bad file descriptor 00:20:46.605 [2024-07-15 22:15:33.495751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:46.605 [2024-07-15 22:15:33.495760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:46.605 [2024-07-15 22:15:33.495769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:46.605 [2024-07-15 22:15:33.495784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
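The notification checks above (@133, @142, @155) count how many new events the host app has emitted since the previous check. A rough sketch of the counting logic, reconstructed from the @88/@89 xtrace values, is:

  # Count notifications newer than the last seen notify_id, then advance notify_id.
  # (Reconstruction from the trace; the real helper lives in host/mdns_discovery.sh.)
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }
  # In this run: 2 events once mdns0_nvme0n1/mdns1_nvme0n1 appear (notify_id 0 -> 2),
  # 2 more after null1/null3 are added at @137/@138 (notify_id 2 -> 4), then 0 after
  # the extra 4421 listeners, since new listeners do not create new bdevs.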
00:20:46.605 [2024-07-15 22:15:33.502726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:46.605 [2024-07-15 22:15:33.502810] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.605 [2024-07-15 22:15:33.502831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x776ab0 with addr=10.0.0.2, port=4420 00:20:46.605 [2024-07-15 22:15:33.502842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.605 [2024-07-15 22:15:33.502858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.605 [2024-07-15 22:15:33.502872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:46.605 [2024-07-15 22:15:33.502882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:46.605 [2024-07-15 22:15:33.502891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:46.605 [2024-07-15 22:15:33.502906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:46.605 [2024-07-15 22:15:33.505637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:46.605 [2024-07-15 22:15:33.505715] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.605 [2024-07-15 22:15:33.505735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7744b0 with addr=10.0.0.3, port=4420 00:20:46.605 [2024-07-15 22:15:33.505745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7744b0 is same with the state(5) to be set 00:20:46.605 [2024-07-15 22:15:33.505760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7744b0 (9): Bad file descriptor 00:20:46.605 [2024-07-15 22:15:33.505775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:46.605 [2024-07-15 22:15:33.505784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:46.605 [2024-07-15 22:15:33.505793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:46.605 [2024-07-15 22:15:33.505807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:46.605 [2024-07-15 22:15:33.512777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:46.605 [2024-07-15 22:15:33.512853] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.605 [2024-07-15 22:15:33.512874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x776ab0 with addr=10.0.0.2, port=4420 00:20:46.605 [2024-07-15 22:15:33.512884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.605 [2024-07-15 22:15:33.512899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.605 [2024-07-15 22:15:33.512913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:46.605 [2024-07-15 22:15:33.512923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:46.605 [2024-07-15 22:15:33.512932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:46.605 [2024-07-15 22:15:33.512947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:46.605 [2024-07-15 22:15:33.515684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:46.605 [2024-07-15 22:15:33.515757] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.605 [2024-07-15 22:15:33.515785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7744b0 with addr=10.0.0.3, port=4420 00:20:46.605 [2024-07-15 22:15:33.515796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7744b0 is same with the state(5) to be set 00:20:46.605 [2024-07-15 22:15:33.515810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7744b0 (9): Bad file descriptor 00:20:46.605 [2024-07-15 22:15:33.515824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:46.605 [2024-07-15 22:15:33.515833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:46.605 [2024-07-15 22:15:33.515842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:46.605 [2024-07-15 22:15:33.515857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:46.605 [2024-07-15 22:15:33.522824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:46.605 [2024-07-15 22:15:33.522928] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.605 [2024-07-15 22:15:33.522951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x776ab0 with addr=10.0.0.2, port=4420 00:20:46.605 [2024-07-15 22:15:33.522962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.605 [2024-07-15 22:15:33.522978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.605 [2024-07-15 22:15:33.522993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:46.605 [2024-07-15 22:15:33.523003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:46.605 [2024-07-15 22:15:33.523012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:46.605 [2024-07-15 22:15:33.523027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:46.605 [2024-07-15 22:15:33.525731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:46.605 [2024-07-15 22:15:33.525808] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.605 [2024-07-15 22:15:33.525833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7744b0 with addr=10.0.0.3, port=4420 00:20:46.605 [2024-07-15 22:15:33.525844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7744b0 is same with the state(5) to be set 00:20:46.605 [2024-07-15 22:15:33.525859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7744b0 (9): Bad file descriptor 00:20:46.605 [2024-07-15 22:15:33.525874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:46.605 [2024-07-15 22:15:33.525883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:46.605 [2024-07-15 22:15:33.525893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:46.605 [2024-07-15 22:15:33.525911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:46.605 [2024-07-15 22:15:33.532892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:46.605 [2024-07-15 22:15:33.532974] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.605 [2024-07-15 22:15:33.532995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x776ab0 with addr=10.0.0.2, port=4420 00:20:46.605 [2024-07-15 22:15:33.533005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x776ab0 is same with the state(5) to be set 00:20:46.605 [2024-07-15 22:15:33.533021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776ab0 (9): Bad file descriptor 00:20:46.605 [2024-07-15 22:15:33.533036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:46.605 [2024-07-15 22:15:33.533045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:46.605 [2024-07-15 22:15:33.533055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:46.606 [2024-07-15 22:15:33.533070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:46.606 [2024-07-15 22:15:33.535778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:46.606 [2024-07-15 22:15:33.535854] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.606 [2024-07-15 22:15:33.535875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7744b0 with addr=10.0.0.3, port=4420 00:20:46.606 [2024-07-15 22:15:33.535885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7744b0 is same with the state(5) to be set 00:20:46.606 [2024-07-15 22:15:33.535901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7744b0 (9): Bad file descriptor 00:20:46.606 [2024-07-15 22:15:33.535916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:46.606 [2024-07-15 22:15:33.535925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:46.606 [2024-07-15 22:15:33.535934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:46.606 [2024-07-15 22:15:33.535948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
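The three blocks above are the bdev_nvme reconnect loop during the port switch: each retry's connect() to port 4420 returns errno 111 (ECONNREFUSED) because that listener has been removed, so controller reinitialization fails until the discovery poller re-attaches the subsystems on 4421 (the "found again" entries that follow). A minimal sketch of watching that transition over the same RPC socket the test uses; the polling loop itself is illustrative and not part of the test scripts, only rpc.py, bdev_nvme_get_controllers and the jq fields appear in the log:

# Sketch only: poll controller state while the listener moves from 4420 to 4421.
# The loop is an assumption; rpc.py, the RPC name and the jq fields are from the log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/tmp/host.sock
for _ in $(seq 1 10); do
    # Print each attached controller together with the trsvcid it currently uses.
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers \
        | jq -r '.[] | "\(.name) \(.ctrlrs[].trid.trsvcid)"'
    sleep 1
done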
00:20:46.606 [2024-07-15 22:15:33.539395] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:20:46.606 [2024-07-15 22:15:33.539425] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:46.606 [2024-07-15 22:15:33.539452] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:46.606 [2024-07-15 22:15:33.539490] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:46.606 [2024-07-15 22:15:33.539507] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:46.606 [2024-07-15 22:15:33.539521] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:46.863 [2024-07-15 22:15:33.625510] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:46.863 [2024-07-15 22:15:33.625584] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:47.796 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.797 22:15:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:20:47.797 [2024-07-15 22:15:34.745578] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:49.183 
22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.183 [2024-07-15 22:15:35.956803] bdev_mdns_client.c: 
470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:20:49.183 2024/07/15 22:15:35 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:49.183 request: 00:20:49.183 { 00:20:49.183 "method": "bdev_nvme_start_mdns_discovery", 00:20:49.183 "params": { 00:20:49.183 "name": "mdns", 00:20:49.183 "svcname": "_nvme-disc._http", 00:20:49.183 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:49.183 } 00:20:49.183 } 00:20:49.183 Got JSON-RPC error response 00:20:49.183 GoRPCClient: error on JSON-RPC call 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:49.183 22:15:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:20:49.777 [2024-07-15 22:15:36.545370] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:49.777 [2024-07-15 22:15:36.645357] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:50.033 [2024-07-15 22:15:36.745380] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:50.033 [2024-07-15 22:15:36.745430] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:50.033 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:50.033 cookie is 0 00:20:50.033 is_local: 1 00:20:50.033 our_own: 0 00:20:50.033 wide_area: 0 00:20:50.033 multicast: 1 00:20:50.033 cached: 1 00:20:50.033 [2024-07-15 22:15:36.845374] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:50.033 [2024-07-15 22:15:36.845422] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:50.033 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:50.033 cookie is 0 00:20:50.033 is_local: 1 00:20:50.033 our_own: 0 00:20:50.033 wide_area: 0 00:20:50.033 multicast: 1 00:20:50.033 cached: 1 00:20:50.033 [2024-07-15 22:15:36.845439] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:50.033 [2024-07-15 22:15:36.945384] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:50.033 [2024-07-15 22:15:36.945441] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:50.033 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:50.033 cookie is 0 00:20:50.033 is_local: 1 00:20:50.033 our_own: 0 00:20:50.033 wide_area: 0 00:20:50.033 multicast: 1 00:20:50.033 cached: 1 00:20:50.290 [2024-07-15 22:15:37.045380] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:50.290 [2024-07-15 22:15:37.045431] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:50.290 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:50.290 cookie is 0 00:20:50.290 is_local: 1 00:20:50.290 our_own: 0 00:20:50.290 wide_area: 0 00:20:50.290 multicast: 1 00:20:50.290 cached: 1 00:20:50.290 [2024-07-15 22:15:37.045451] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:50.853 [2024-07-15 22:15:37.752043] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:50.853 [2024-07-15 22:15:37.752094] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:50.853 [2024-07-15 22:15:37.752117] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:51.110 [2024-07-15 22:15:37.840189] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:20:51.110 [2024-07-15 22:15:37.907517] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:51.110 [2024-07-15 22:15:37.907565] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:51.110 [2024-07-15 22:15:37.952244] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:51.110 [2024-07-15 22:15:37.952305] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:51.110 [2024-07-15 22:15:37.952338] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:51.110 [2024-07-15 22:15:38.038392] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:20:51.367 [2024-07-15 22:15:38.098679] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:51.367 [2024-07-15 22:15:38.098742] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:54.692 22:15:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:20:54.692 22:15:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:54.692 22:15:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:54.692 22:15:40 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.692 22:15:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.692 22:15:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:54.692 22:15:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:54.692 22:15:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.692 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:20:54.692 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:20:54.692 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:54.692 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:54.692 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.692 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.692 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:54.692 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:54.692 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.693 [2024-07-15 22:15:41.153536] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:20:54.693 request: 00:20:54.693 { 00:20:54.693 "method": "bdev_nvme_start_mdns_discovery", 00:20:54.693 "params": { 00:20:54.693 "name": "cdc", 00:20:54.693 "svcname": "_nvme-disc._tcp", 00:20:54.693 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:54.693 } 00:20:54.693 } 00:20:54.693 Got JSON-RPC error response 00:20:54.693 GoRPCClient: error on JSON-RPC call 00:20:54.693 2024/07/15 22:15:41 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 94008 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 94008 00:20:54.693 [2024-07-15 22:15:41.370954] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 94035 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:20:54.693 Got SIGTERM, quitting. 00:20:54.693 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:54.693 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:54.693 avahi-daemon 0.8 exiting. 
00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:54.693 rmmod nvme_tcp 00:20:54.693 rmmod nvme_fabrics 00:20:54.693 rmmod nvme_keyring 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 93967 ']' 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 93967 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 93967 ']' 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 93967 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93967 00:20:54.693 killing process with pid 93967 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93967' 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 93967 00:20:54.693 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 93967 00:20:54.952 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:54.952 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:54.952 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:54.952 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:54.952 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:54.952 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.952 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.952 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.952 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:54.952 ************************************ 00:20:54.952 END TEST nvmf_mdns_discovery 00:20:54.952 ************************************ 00:20:54.952 00:20:54.952 real 0m19.774s 00:20:54.952 user 0m39.345s 00:20:54.952 sys 0m1.907s 00:20:54.952 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:54.952 22:15:41 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.952 22:15:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
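That closes the mdns discovery test. Its control surface is a small set of JSON-RPC calls: bdev_nvme_start_mdns_discovery to register the avahi browser, the deliberately failing duplicate registrations that return Code=-17 (File exists) earlier in the log, and bdev_nvme_stop_mdns_discovery in the teardown that triggers the "Stopping avahi poller" message. A condensed sketch of that lifecycle, assuming the same host socket and hostnqn the test uses; the flags are copied from the log:

# Sketch only: mdns discovery lifecycle as exercised by host/mdns_discovery.sh.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/tmp/host.sock
NQN=nqn.2021-12.io.spdk:test

# Register an mDNS browser for the NVMe discovery service type.
"$RPC" -s "$SOCK" bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q "$NQN"

# A second registration is rejected with -17 "File exists", whether the clash is
# on the name (mdns + _nvme-disc._http) or on the service type (cdc + _nvme-disc._tcp).
"$RPC" -s "$SOCK" bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q "$NQN" \
    || echo "mDNS discovery already running"

# Teardown, as done at the end of the test above.
"$RPC" -s "$SOCK" bdev_nvme_stop_mdns_discovery -b mdns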
00:20:54.952 22:15:41 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:20:54.952 22:15:41 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:54.952 22:15:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:54.952 22:15:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:54.952 22:15:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:54.952 ************************************ 00:20:54.952 START TEST nvmf_host_multipath 00:20:54.952 ************************************ 00:20:54.952 22:15:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:54.952 * Looking for test storage... 00:20:54.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:54.952 22:15:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:54.952 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.210 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:55.211 Cannot 
find device "nvmf_tgt_br" 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.211 Cannot find device "nvmf_tgt_br2" 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:55.211 Cannot find device "nvmf_tgt_br" 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:55.211 Cannot find device "nvmf_tgt_br2" 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:20:55.211 22:15:41 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:55.211 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:55.469 22:15:42 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:55.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:20:55.469 00:20:55.469 --- 10.0.0.2 ping statistics --- 00:20:55.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.469 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:55.469 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:55.469 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:55.469 00:20:55.469 --- 10.0.0.3 ping statistics --- 00:20:55.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.469 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:55.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:55.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:55.469 00:20:55.469 --- 10.0.0.1 ping statistics --- 00:20:55.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.469 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:55.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94585 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94585 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94585 ']' 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.469 22:15:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:55.469 [2024-07-15 22:15:42.327055] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:20:55.469 [2024-07-15 22:15:42.327166] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.727 [2024-07-15 22:15:42.469561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:55.727 [2024-07-15 22:15:42.560802] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
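The ping checks just above complete the nvmf_veth_init network setup for the multipath test: a namespace for the target, veth pairs bridged to the initiator side, 10.0.0.1/2/3 addressing and an iptables accept for port 4420. Condensed into one place as a sketch that mirrors nvmf/common.sh rather than replacing it; interface names, addresses and rules are copied from the log (the second target interface for 10.0.0.3 follows the same pattern and is omitted here):

# Sketch of the veth/netns plumbing performed by nvmf_veth_init above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target reachability check, as in the log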
00:20:55.727 [2024-07-15 22:15:42.561132] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.727 [2024-07-15 22:15:42.561364] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.727 [2024-07-15 22:15:42.561673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.727 [2024-07-15 22:15:42.561832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.727 [2024-07-15 22:15:42.562116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.727 [2024-07-15 22:15:42.562124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.661 22:15:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.661 22:15:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:56.661 22:15:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:56.661 22:15:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:56.661 22:15:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:56.661 22:15:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.661 22:15:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94585 00:20:56.661 22:15:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:56.919 [2024-07-15 22:15:43.716174] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.919 22:15:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:57.177 Malloc0 00:20:57.177 22:15:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:57.436 22:15:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:57.693 22:15:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.950 [2024-07-15 22:15:44.726913] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.950 22:15:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:58.208 [2024-07-15 22:15:44.999047] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:58.208 22:15:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:58.208 22:15:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94694 00:20:58.208 22:15:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:58.208 22:15:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # 
waitforlisten 94694 /var/tmp/bdevperf.sock 00:20:58.208 22:15:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94694 ']' 00:20:58.208 22:15:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.208 22:15:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.208 22:15:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.208 22:15:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.208 22:15:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:58.466 22:15:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.466 22:15:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:58.467 22:15:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:58.724 22:15:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:59.291 Nvme0n1 00:20:59.291 22:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:59.565 Nvme0n1 00:20:59.565 22:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:59.565 22:15:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:00.518 22:15:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:00.518 22:15:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:00.775 22:15:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:01.340 22:15:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:01.340 22:15:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94769 00:21:01.340 22:15:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:01.340 22:15:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:07.894 22:15:54 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:07.894 Attaching 4 probes... 00:21:07.894 @path[10.0.0.2, 4421]: 16899 00:21:07.894 @path[10.0.0.2, 4421]: 17128 00:21:07.894 @path[10.0.0.2, 4421]: 16986 00:21:07.894 @path[10.0.0.2, 4421]: 16715 00:21:07.894 @path[10.0.0.2, 4421]: 16410 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94769 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:07.894 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:08.151 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:08.151 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94901 00:21:08.151 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:08.151 22:15:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:14.703 22:16:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:14.703 22:16:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:14.703 Attaching 4 probes... 
00:21:14.703 @path[10.0.0.2, 4420]: 12858 00:21:14.703 @path[10.0.0.2, 4420]: 15471 00:21:14.703 @path[10.0.0.2, 4420]: 16026 00:21:14.703 @path[10.0.0.2, 4420]: 14605 00:21:14.703 @path[10.0.0.2, 4420]: 15360 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94901 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:14.703 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:14.998 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:14.998 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95036 00:21:14.998 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:14.998 22:16:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:21.560 22:16:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:21.560 22:16:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:21.560 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:21.560 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:21.560 Attaching 4 probes... 
00:21:21.560 @path[10.0.0.2, 4421]: 14078 00:21:21.560 @path[10.0.0.2, 4421]: 16748 00:21:21.560 @path[10.0.0.2, 4421]: 13714 00:21:21.560 @path[10.0.0.2, 4421]: 15952 00:21:21.560 @path[10.0.0.2, 4421]: 16613 00:21:21.560 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:21.560 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:21.561 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:21.561 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:21.561 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:21.561 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:21.561 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95036 00:21:21.561 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:21.561 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:21.561 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:21.561 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:21.818 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:21.818 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95161 00:21:21.818 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:21.818 22:16:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:28.375 22:16:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:28.375 22:16:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:28.375 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:28.375 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:28.375 Attaching 4 probes... 
00:21:28.375 00:21:28.375 00:21:28.375 00:21:28.375 00:21:28.375 00:21:28.375 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:28.375 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:28.375 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:28.375 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:28.375 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:28.375 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:28.375 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95161 00:21:28.375 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:28.375 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:28.375 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:28.633 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:28.889 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:28.889 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95297 00:21:28.889 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:28.889 22:16:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:35.492 22:16:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:35.492 22:16:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:35.492 22:16:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:35.492 22:16:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:35.492 Attaching 4 probes... 
00:21:35.492 @path[10.0.0.2, 4421]: 16412 00:21:35.492 @path[10.0.0.2, 4421]: 16412 00:21:35.492 @path[10.0.0.2, 4421]: 16483 00:21:35.492 @path[10.0.0.2, 4421]: 16878 00:21:35.492 @path[10.0.0.2, 4421]: 16855 00:21:35.492 22:16:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:35.492 22:16:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:35.492 22:16:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:35.492 22:16:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:35.492 22:16:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:35.492 22:16:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:35.492 22:16:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95297 00:21:35.492 22:16:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:35.492 22:16:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:35.493 [2024-07-15 22:16:22.391272] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391322] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391340] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391354] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391368] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391382] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391397] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391424] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391437] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391450] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391476] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391489] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 
[2024-07-15 22:16:22.391502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391515] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391527] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391539] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391551] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391575] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391586] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391610] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391657] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391669] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391681] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391693] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391752] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391763] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the 
state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391775] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391787] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391833] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391857] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391881] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391904] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391939] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391950] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391961] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391973] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391986] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.391997] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.392009] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.392021] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.392032] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.392044] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.392056] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 [2024-07-15 22:16:22.392068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb87440 is same with the state(5) to be set 00:21:35.493 22:16:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:36.869 22:16:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:36.869 22:16:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95434 00:21:36.869 22:16:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:36.869 22:16:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:43.443 Attaching 4 probes... 
00:21:43.443 @path[10.0.0.2, 4420]: 15999 00:21:43.443 @path[10.0.0.2, 4420]: 16368 00:21:43.443 @path[10.0.0.2, 4420]: 16288 00:21:43.443 @path[10.0.0.2, 4420]: 16544 00:21:43.443 @path[10.0.0.2, 4420]: 16497 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95434 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:43.443 22:16:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:43.443 [2024-07-15 22:16:30.034391] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:43.443 22:16:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:43.443 22:16:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:50.050 22:16:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:50.050 22:16:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95623 00:21:50.050 22:16:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:50.050 22:16:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:56.619 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:56.619 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:56.619 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:56.619 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:56.619 Attaching 4 probes... 
00:21:56.619 @path[10.0.0.2, 4421]: 14244 00:21:56.619 @path[10.0.0.2, 4421]: 15020 00:21:56.619 @path[10.0.0.2, 4421]: 16246 00:21:56.619 @path[10.0.0.2, 4421]: 16161 00:21:56.619 @path[10.0.0.2, 4421]: 15588 00:21:56.619 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:56.619 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:56.619 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:56.619 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:56.619 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95623 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94694 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94694 ']' 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94694 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94694 00:21:56.620 killing process with pid 94694 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94694' 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94694 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94694 00:21:56.620 Connection closed with partial response: 00:21:56.620 00:21:56.620 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94694 00:21:56.620 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:56.620 [2024-07-15 22:15:45.061723] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:21:56.620 [2024-07-15 22:15:45.061829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94694 ] 00:21:56.620 [2024-07-15 22:15:45.194181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.620 [2024-07-15 22:15:45.258010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.620 Running I/O for 90 seconds... 
00:21:56.620 [2024-07-15 22:15:54.951892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.620 [2024-07-15 22:15:54.951966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.952935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.952967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.953026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.953064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.953135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.953172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.953210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.953247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.953284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.953321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:56.620 [2024-07-15 22:15:54.953358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.953396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.953433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.620 [2024-07-15 22:15:54.953470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.620 [2024-07-15 22:15:54.953493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.953510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.953532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.953547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.953576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.953592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.953614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.953630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.953651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.953666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.953688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.953703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.953727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.953752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.954585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.954615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.954645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.954662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.954685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.954702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.954724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.954739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.954761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.954776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.954798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.954813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.954835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.954850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.954872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.954898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.954931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.954957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.954982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.954998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:21:56.621 [2024-07-15 22:15:54.955379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.621 [2024-07-15 22:15:54.955630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.955668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.955707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.955744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.955781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.955818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.955855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.955900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.955957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.955989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.621 [2024-07-15 22:15:54.956013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:56.621 [2024-07-15 22:15:54.956036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:56.622 [2024-07-15 22:15:54.956626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.956977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.956993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.957015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.957031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.957792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.957828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.957866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.957889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.957919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.957942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.957971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.957998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:56.622 [2024-07-15 22:15:54.958590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.622 [2024-07-15 22:15:54.958605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.958628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.958643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:21:56.623 [2024-07-15 22:15:54.958665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.958680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.958702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.958718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.958740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.958755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.958784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.958800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.958822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.958847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.958869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.958885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.958907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.958935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.958978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.958995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.623 [2024-07-15 22:15:54.959170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.959946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.959972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.960026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.960097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:56.623 [2024-07-15 22:15:54.960153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.960210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.960260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.960326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.960376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.960426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.960482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.960532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.960587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.623 [2024-07-15 22:15:54.960627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:56.623 [2024-07-15 22:15:54.960649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.960676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.960700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.960716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.960738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.960756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.960779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.960794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.960817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.960833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.960855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.960870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.960892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.960909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.960944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.960963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.960985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.961001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.961024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.961039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.961928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.961963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.961998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.962021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.962073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.962169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.962221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.962280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.962330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.962387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.962427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.624 [2024-07-15 22:15:54.962466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:21:56.624 [2024-07-15 22:15:54.962488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.962503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.962540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.962578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.962615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.962653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.962710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.962747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.962800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.962849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.962898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.962956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.962989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.963010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.963039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.963060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.963106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.963131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.963160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.963182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.963211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.624 [2024-07-15 22:15:54.963232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.624 [2024-07-15 22:15:54.963261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.625 [2024-07-15 22:15:54.963281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:56.625 [2024-07-15 22:15:54.963822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.963952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.963973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.964002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.964023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.964052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.964073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.964131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.964149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.964172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.964190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.964212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.964228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.964250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.964265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.964305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.964324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.964346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.964363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.964395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.964421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.981844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.981928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.981982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.982011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.982051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.982125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.982168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.982189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.982219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.982239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.982269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.982289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.982320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.982340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.983409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.983448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.983488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.983510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.983540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.983561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.983591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.983612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.983641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.983661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.983710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.983731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.983761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.625 [2024-07-15 22:15:54.983781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:56.625 [2024-07-15 22:15:54.983810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.983830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.983859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.983879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.983921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.983950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:21:56.626 [2024-07-15 22:15:54.983989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.984964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.984993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.985013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.985061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.985142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.626 [2024-07-15 22:15:54.985194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.985244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.985293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.985343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.985392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.985442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.985490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.985539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.985587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:56.626 [2024-07-15 22:15:54.985645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:56.626 [2024-07-15 22:15:54.985676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.626 [2024-07-15 22:15:54.985705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.985744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.985771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.985821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.985849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.985888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.985917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.985961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.985989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 
lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.986955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.986983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.987030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.987067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.987144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.987183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.987234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.987271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.987326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.987361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.987409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.987442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.987490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.987547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.987596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.987632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.988845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.988884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:21:56.627 [2024-07-15 22:15:54.988922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.988944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.988975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.988996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.989025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.989051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.989105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.989142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.989175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.989196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.989226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.989245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.989277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.989311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.989344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.989365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.989395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.989425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.989464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.627 [2024-07-15 22:15:54.989499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.989548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.627 [2024-07-15 22:15:54.989581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.989620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.627 [2024-07-15 22:15:54.989651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.989694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.627 [2024-07-15 22:15:54.989722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:56.627 [2024-07-15 22:15:54.989760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.627 [2024-07-15 22:15:54.989787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.989825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.989853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.989890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.989917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.989955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.989982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.990056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.990151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.990204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.990253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.990302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.990366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.990420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.990469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.990518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.628 [2024-07-15 22:15:54.990567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.990616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.990665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:56.628 [2024-07-15 22:15:54.990746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.990817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.990890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.990955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.990995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 
nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:56.628 [2024-07-15 22:15:54.991889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.628 [2024-07-15 22:15:54.991910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.991939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.991959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.991988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.992008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.992037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.992058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.992116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.992145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.992176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.992196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.992225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.992245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.992291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.992318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.992350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.992371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.992409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.992429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.992460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.992480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.993492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.993552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:21:56.629 [2024-07-15 22:15:54.993604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.993639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.993678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.993699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.993726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.993744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.993771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.993790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.993817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.993835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.993862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.993880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.993906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.993924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.993951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.993969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.993996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.994963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.994992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.995019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.995055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.995104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.995139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.995158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.995185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.629 [2024-07-15 22:15:54.995204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.995231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:56.629 [2024-07-15 22:15:54.995249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:56.629 [2024-07-15 22:15:54.995276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.629 [2024-07-15 22:15:54.995294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.995970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.995988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:21:56.630 [2024-07-15 22:15:54.996787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.996959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.996988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.997006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.997033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.997052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.997078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.997115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.997143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.997162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.997190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.997208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.998265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.998310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.998353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.998380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.998415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.998442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.998482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.998502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.998529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.630 [2024-07-15 22:15:54.998547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:56.630 [2024-07-15 22:15:54.998574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:54.998592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.998633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:54.998652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.998679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:54.998697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.998723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:54.998759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.998820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:54.998848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.998875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:54.998894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.998921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:54.998940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.998967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.998985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:56.631 [2024-07-15 22:15:54.999425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.631 [2024-07-15 22:15:54.999843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:54.999908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:54.999943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:54.999979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.000015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 
nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.000040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.000097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.000127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.000156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.000175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.000201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.000220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.000247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.000265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.000307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.000326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.000354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.000382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.014966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.015040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.015106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.015139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.015182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.015210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.015250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.015299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.015346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.015381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.015450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.015483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.015530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.015563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.015608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.015641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.015683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.631 [2024-07-15 22:15:55.015715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:56.631 [2024-07-15 22:15:55.015757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.015787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.015826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.015859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.015903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.015936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.015983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.016017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:21:56.632 [2024-07-15 22:15:55.016058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.016112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.016159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.016193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.016234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.016265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.016338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.016375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.016439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.016472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.016517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.016550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.016597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.016630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.016676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.016708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.018260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.018310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.018393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.018433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.018480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.018516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.018558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.018592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.018639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.018672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.018716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.018748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.018794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.018827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.018874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.018908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.018951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.019050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.019156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.019232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019263] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.019304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.019386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.019463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.019543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.019623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.019698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.019778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.019854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.019930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.019982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.020029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 
22:15:55.020062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.020126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.020163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.020209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.632 [2024-07-15 22:15:55.020241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:56.632 [2024-07-15 22:15:55.020297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.020332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.020374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.020408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.020454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.020486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.020533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.020565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.020612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.020645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.020691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.020722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.020764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.020797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.020852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46072 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.020885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.020931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.633 [2024-07-15 22:15:55.020963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.021028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.021062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.021132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.021170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.021215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.021247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.021291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.021325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.021366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.021401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.021451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.021484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.021527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.021558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.021604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.021636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.021682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:85 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.021716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.021759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.021792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.021838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.021870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.021914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.021949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.022053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.022156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.022235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.022313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.022392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.022471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022515] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.022549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.022625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.022701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.022780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.022856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.022935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.022982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.023029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.023072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.023133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.023180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.023213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.023259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.023292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 
dnr:0 00:21:56.633 [2024-07-15 22:15:55.023334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.023365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.023411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.023445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.023488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.023519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.023565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.023597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.023642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.633 [2024-07-15 22:15:55.023672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.633 [2024-07-15 22:15:55.023719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.634 [2024-07-15 22:15:55.023746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:56.634 [2024-07-15 22:15:55.023787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.634 [2024-07-15 22:15:55.023815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:56.634 [2024-07-15 22:15:55.026012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.634 [2024-07-15 22:15:55.026062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:56.634 [2024-07-15 22:16:01.588351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.634 [2024-07-15 22:16:01.588442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:56.634 [2024-07-15 22:16:01.588503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.634 [2024-07-15 22:16:01.588523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:21:56.634 [2024-07-15 22:16:01.588547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:56.634 [2024-07-15 22:16:01.588562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:21:56.634 [... repetitive output condensed: the same nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pair is printed for every outstanding READ and WRITE on qid:1 (lba range roughly 49360-50408), and each completion carries the status ASYMMETRIC ACCESS INACCESSIBLE (03/02); a first burst is logged at 22:16:01.588-22:16:01.594 and a second burst at 22:16:08.724 onward, and the dump continues below ...]
lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.729828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.729862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.729878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.729905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.729920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.729947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.729970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:56.639 [2024-07-15 22:16:08.730718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.730971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.730999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.731014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.731042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.731057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.731100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.731118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.731146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.731162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.731189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.731204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.731232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.731259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.731299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.731318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.731345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.731360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.731388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.639 [2024-07-15 22:16:08.731403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:56.639 [2024-07-15 22:16:08.731430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:08.731446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:08.731480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:08.731498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.390885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.390988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.391069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.391122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.391162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.391188] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.391223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.391248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.391282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.391307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.391344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.391368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.391401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.391427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.391461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.391486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.391520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.640 [2024-07-15 22:16:22.391545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.392298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.392357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.392405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.392438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.392492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.392524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.392562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.392604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.392640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.392666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.392701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.392727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.392761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.392778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.392802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.392824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.392854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.392879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.392913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.392940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.392975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.392997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393630] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.640 [2024-07-15 22:16:22.393855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:56.640 [2024-07-15 22:16:22.393895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.393915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.393944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.393964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.393994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.394964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.394985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.395013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.395033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.395061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.395108] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.395143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.395161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.395183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.395198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.395220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.395235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.395258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.395272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.395294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.395309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.395330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.395345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.395367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.395381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.395403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.395418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.395439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.395454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.395477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.395491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.396142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.641 [2024-07-15 22:16:22.396170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.396191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.641 [2024-07-15 22:16:22.396217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.396240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.641 [2024-07-15 22:16:22.396255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.396271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.641 [2024-07-15 22:16:22.396303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.641 [2024-07-15 22:16:22.396322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.641 [2024-07-15 22:16:22.396336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 
[2024-07-15 22:16:22.396488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.396979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.396999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:56.642 [2024-07-15 22:16:22.397653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.642 [2024-07-15 22:16:22.397848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.642 [2024-07-15 22:16:22.397871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.643 [2024-07-15 22:16:22.397896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.643 [2024-07-15 22:16:22.397918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.643 [2024-07-15 22:16:22.397933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.643 [2024-07-15 22:16:22.397971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.643 [2024-07-15 22:16:22.397991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.643 [2024-07-15 22:16:22.398008] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.643 [2024-07-15 22:16:22.398022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.643 [2024-07-15 22:16:22.398038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.643 [2024-07-15 22:16:22.398052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.643 [2024-07-15 22:16:22.398068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.643 [2024-07-15 22:16:22.398109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.643 [2024-07-15 22:16:22.398127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.643 [2024-07-15 22:16:22.398141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.643 [2024-07-15 22:16:22.398157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.643 [2024-07-15 22:16:22.398172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.643 [2024-07-15 22:16:22.398188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:56.643 [2024-07-15 22:16:22.398205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.643 [2024-07-15 22:16:22.398220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb5500 is same with the state(5) to be set 00:21:56.643 [2024-07-15 22:16:22.398414] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xeb5500 was disconnected and freed. reset controller. 
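Every completion in the burst above carries the same status, ABORTED - SQ DELETION (00/08): the WRITEs were still in flight when their submission queue went away, so they are failed back with that generic status just before bdev_nvme frees the qpair and resets the controller. When reading a capture like this, the storm can be condensed with standard tools; a small sketch, assuming the console output has been saved to a file (bdevperf.log is only a placeholder name):

  grep -c 'ABORTED - SQ DELETION' bdevperf.log                                   # total aborted completions
  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:[0-9]+' bdevperf.log | sort | uniq -c  # aborted commands grouped by opcode and queue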
00:21:56.643 [2024-07-15 22:16:22.399924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:56.643 [2024-07-15 22:16:22.400021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:56.643 [2024-07-15 22:16:22.400044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.643 [2024-07-15 22:16:22.400077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10814d0 (9): Bad file descriptor 00:21:56.643 [2024-07-15 22:16:22.404255] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:56.643 [2024-07-15 22:16:22.404307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10814d0 with addr=10.0.0.2, port=4421 00:21:56.643 [2024-07-15 22:16:22.404327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10814d0 is same with the state(5) to be set 00:21:56.643 [2024-07-15 22:16:22.405027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10814d0 (9): Bad file descriptor 00:21:56.643 [2024-07-15 22:16:22.405313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:56.643 [2024-07-15 22:16:22.405339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:56.643 [2024-07-15 22:16:22.405355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:56.643 [2024-07-15 22:16:22.405574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:56.643 [2024-07-15 22:16:22.405599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:56.643 [2024-07-15 22:16:32.482908] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
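What follows the abort storm is the bdev_nvme recovery path: the controller is disconnected, the first reconnect to 10.0.0.2:4421 is refused (connect() errno 111 is ECONNREFUSED), the controller is briefly marked failed, and roughly ten seconds later the reset completes successfully. A path flap of this kind can be reproduced by hand against a running target by removing the listener the host is using and re-adding one, then watching the host-side controller recover. A rough sketch using rpc.py calls that appear elsewhere in this log (NQN, addresses and the bdevperf RPC socket as in this run; the exact listener toggling done by multipath.sh is not shown here, so treat this as illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # drop the path the host is connected to; in-flight I/O completes with ABORTED - SQ DELETION
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  # offer another path for the reset/reconnect logic to find
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  # poll the initiator-side view of the controller while it recovers
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers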
00:21:56.643 Received shutdown signal, test time was about 56.133375 seconds 00:21:56.643 00:21:56.643 Latency(us) 00:21:56.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.643 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:56.643 Verification LBA range: start 0x0 length 0x4000 00:21:56.643 Nvme0n1 : 56.13 6893.44 26.93 0.00 0.00 18537.22 1817.13 7046430.72 00:21:56.643 =================================================================================================================== 00:21:56.643 Total : 6893.44 26.93 0.00 0.00 18537.22 1817.13 7046430.72 00:21:56.643 22:16:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:56.643 rmmod nvme_tcp 00:21:56.643 rmmod nvme_fabrics 00:21:56.643 rmmod nvme_keyring 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94585 ']' 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94585 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94585 ']' 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94585 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94585 00:21:56.643 killing process with pid 94585 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94585' 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94585 00:21:56.643 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94585 00:21:56.917 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:56.917 22:16:43 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:56.917 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:56.917 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:56.917 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:56.917 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.917 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.917 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.917 22:16:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:56.917 00:21:56.917 real 1m1.876s 00:21:56.917 user 2m55.858s 00:21:56.917 sys 0m13.634s 00:21:56.917 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:56.917 ************************************ 00:21:56.917 22:16:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:56.917 END TEST nvmf_host_multipath 00:21:56.917 ************************************ 00:21:56.917 22:16:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:56.917 22:16:43 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:56.917 22:16:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:56.917 22:16:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.917 22:16:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:56.917 ************************************ 00:21:56.917 START TEST nvmf_timeout 00:21:56.917 ************************************ 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:56.917 * Looking for test storage... 
00:21:56.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.917 
22:16:43 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.917 22:16:43 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:56.917 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:57.176 Cannot find device "nvmf_tgt_br" 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:57.176 Cannot find device "nvmf_tgt_br2" 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:57.176 Cannot find device "nvmf_tgt_br" 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:57.176 Cannot find device "nvmf_tgt_br2" 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:57.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:57.176 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:57.176 22:16:43 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:57.176 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:57.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:21:57.434 00:21:57.434 --- 10.0.0.2 ping statistics --- 00:21:57.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.434 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:57.434 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:57.434 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:21:57.434 00:21:57.434 --- 10.0.0.3 ping statistics --- 00:21:57.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.434 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:57.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:57.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:21:57.434 00:21:57.434 --- 10.0.0.1 ping statistics --- 00:21:57.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.434 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=95942 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 95942 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 95942 ']' 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.434 22:16:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:57.434 [2024-07-15 22:16:44.286185] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
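Before the target comes up, nvmf_veth_init (traced above) builds the virtual test network: one initiator-side veth in the root namespace at 10.0.0.1, two target-side veths moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, all joined by the nvmf_br bridge, with iptables opened for TCP port 4420 and the three pings confirming reachability. Pulled out of the trace into a standalone sketch (same interface names and addresses as in this run; needs root):

  # namespace and veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiator in the root namespace, both target interfaces inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge everything together and open the NVMe/TCP port
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # reachability check in both directions
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1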
00:21:57.434 [2024-07-15 22:16:44.286276] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.692 [2024-07-15 22:16:44.426938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:57.692 [2024-07-15 22:16:44.497780] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.692 [2024-07-15 22:16:44.497844] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.692 [2024-07-15 22:16:44.497857] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.692 [2024-07-15 22:16:44.497867] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.692 [2024-07-15 22:16:44.497875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.692 [2024-07-15 22:16:44.498024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.692 [2024-07-15 22:16:44.498447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.625 22:16:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.625 22:16:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:58.625 22:16:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:58.625 22:16:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:58.625 22:16:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:58.625 22:16:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.625 22:16:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.625 22:16:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:58.625 [2024-07-15 22:16:45.526356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.625 22:16:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:58.883 Malloc0 00:21:58.883 22:16:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.141 22:16:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:59.399 22:16:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:59.657 [2024-07-15 22:16:46.605051] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.915 22:16:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96038 00:21:59.915 22:16:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:59.915 22:16:46 nvmf_tcp.nvmf_timeout -- 
host/timeout.sh@34 -- # waitforlisten 96038 /var/tmp/bdevperf.sock 00:21:59.915 22:16:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96038 ']' 00:21:59.915 22:16:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.915 22:16:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.915 22:16:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.915 22:16:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.915 22:16:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:59.915 [2024-07-15 22:16:46.679853] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:21:59.915 [2024-07-15 22:16:46.679951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96038 ] 00:21:59.915 [2024-07-15 22:16:46.820191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.174 [2024-07-15 22:16:46.891545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.109 22:16:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.109 22:16:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:01.109 22:16:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:01.109 22:16:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:01.676 NVMe0n1 00:22:01.676 22:16:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96085 00:22:01.676 22:16:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.676 22:16:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:01.676 Running I/O for 10 seconds... 
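At this point the data path is fully assembled: the namespaced nvmf_tgt exports a 64 MB malloc bdev (512-byte blocks) as nqn.2016-06.io.spdk:cnode1 over NVMe/TCP on 10.0.0.2:4420, and a separate bdevperf process attaches to it as NVMe0 with a 5-second controller-loss timeout and a 2-second reconnect delay, the knobs the timeout test goes on to exercise. Condensed from the trace above into a sketch (binary and script paths as in this run; the harness backgrounds bdevperf and waits for its RPC socket before issuing the host-side calls):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target side: transport, backing bdev, subsystem, namespace, listener
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: a bdevperf instance with its own RPC socket, then attach and run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Running bdevperf as its own process with a dedicated RPC socket is what lets the test keep I/O in flight while the target-side listener is manipulated underneath it, which is exactly what happens next in the log.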
00:22:02.627 22:16:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:02.889 [2024-07-15 22:16:49.604396] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604452] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604473] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604481] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604489] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604497] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604506] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604514] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604522] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604530] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604538] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604547] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604555] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604571] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604579] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604587] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604619] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604628] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604636] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604644] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604652] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604659] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604684] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604700] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604725] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604733] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604741] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604759] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604775] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604791] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604807] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604833] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604849] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604865] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604873] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604881] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604897] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604905] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604921] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604930] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604946] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604954] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.604962] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x795900 is same with the state(5) to be set 00:22:02.889 [2024-07-15 22:16:49.605734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:02.889 [2024-07-15 22:16:49.605767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.889 [2024-07-15 22:16:49.605789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.889 [2024-07-15 22:16:49.605800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.889 [2024-07-15 22:16:49.605812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.889 [2024-07-15 22:16:49.605821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.889 [2024-07-15 22:16:49.605833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.889 [2024-07-15 22:16:49.605842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.889 [2024-07-15 22:16:49.605853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.889 [2024-07-15 22:16:49.605863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.889 [2024-07-15 22:16:49.605874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.889 [2024-07-15 22:16:49.605883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.889 [2024-07-15 22:16:49.605895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.889 [2024-07-15 22:16:49.605904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.889 [2024-07-15 22:16:49.605915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.889 [2024-07-15 22:16:49.605924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.605935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.605944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.605955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.605965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.605975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 
22:16:49.605985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.605996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.890 [2024-07-15 22:16:49.606389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.890 [2024-07-15 22:16:49.606825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.890 [2024-07-15 22:16:49.606834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 
22:16:49.606845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.606855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.606865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.606875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.606886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.606895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.606906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.606915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.606926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.606935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.606946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.606955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.606967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.606976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.606988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.606997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.891 [2024-07-15 22:16:49.607365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 
[2024-07-15 22:16:49.607695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.891 [2024-07-15 22:16:49.607728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.891 [2024-07-15 22:16:49.607737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.607757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.607777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.607798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.607818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.607838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.607858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.607879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.607899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.607919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.607939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.607959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.607981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.607992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.892 [2024-07-15 22:16:49.608424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.892 [2024-07-15 22:16:49.608467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:22:02.892 [2024-07-15 22:16:49.608476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.892 [2024-07-15 22:16:49.608497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.892 [2024-07-15 22:16:49.608506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78920 len:8 PRP1 0x0 PRP2 0x0 00:22:02.892 [2024-07-15 22:16:49.608514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.892 [2024-07-15 22:16:49.608559] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8068d0 was disconnected and freed. reset controller. 
00:22:02.892 [2024-07-15 22:16:49.608816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.892 [2024-07-15 22:16:49.608895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x799240 (9): Bad file descriptor 00:22:02.892 [2024-07-15 22:16:49.609007] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.892 [2024-07-15 22:16:49.609028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x799240 with addr=10.0.0.2, port=4420 00:22:02.892 [2024-07-15 22:16:49.609038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x799240 is same with the state(5) to be set 00:22:02.892 [2024-07-15 22:16:49.609057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x799240 (9): Bad file descriptor 00:22:02.892 [2024-07-15 22:16:49.609072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.892 [2024-07-15 22:16:49.609095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.892 [2024-07-15 22:16:49.609108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.892 [2024-07-15 22:16:49.609129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.892 [2024-07-15 22:16:49.609140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.892 22:16:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:04.796 [2024-07-15 22:16:51.609363] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.796 [2024-07-15 22:16:51.609417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x799240 with addr=10.0.0.2, port=4420 00:22:04.796 [2024-07-15 22:16:51.609434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x799240 is same with the state(5) to be set 00:22:04.796 [2024-07-15 22:16:51.609461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x799240 (9): Bad file descriptor 00:22:04.796 [2024-07-15 22:16:51.609493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.796 [2024-07-15 22:16:51.609504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.796 [2024-07-15 22:16:51.609515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.796 [2024-07-15 22:16:51.609542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.796 [2024-07-15 22:16:51.609553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.796 22:16:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:04.796 22:16:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:04.796 22:16:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:05.176 22:16:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:05.176 22:16:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:05.176 22:16:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:05.176 22:16:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:05.434 22:16:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:05.434 22:16:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:06.809 [2024-07-15 22:16:53.609840] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.809 [2024-07-15 22:16:53.609910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x799240 with addr=10.0.0.2, port=4420 00:22:06.809 [2024-07-15 22:16:53.609926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x799240 is same with the state(5) to be set 00:22:06.809 [2024-07-15 22:16:53.609954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x799240 (9): Bad file descriptor 00:22:06.809 [2024-07-15 22:16:53.609974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:06.809 [2024-07-15 22:16:53.609984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:06.809 [2024-07-15 22:16:53.609995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.809 [2024-07-15 22:16:53.610022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.809 [2024-07-15 22:16:53.610034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:08.705 [2024-07-15 22:16:55.610237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:08.705 [2024-07-15 22:16:55.610311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:08.705 [2024-07-15 22:16:55.610325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:08.705 [2024-07-15 22:16:55.610335] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:08.705 [2024-07-15 22:16:55.610363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
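A minimal sketch of the get_controller/get_bdev helpers traced above (host/timeout.sh@41 and @37): the rpc.py invocations, the /var/tmp/bdevperf.sock socket and the jq filter are taken verbatim from this log, while the surrounding function bodies are reconstructed from the xtrace and are not a copy of host/timeout.sh.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

get_controller() {
    # timeout.sh@41 in the trace: list the attached NVMe controllers over the
    # bdevperf RPC socket and keep only their names (NVMe0 while attached).
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
    # timeout.sh@37 in the trace: same idea for the bdevs built on top of the
    # controller (NVMe0n1 while attached).
    "$rpc" -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
}

# What the @57/@58 checks above assert at this point in the run:
[[ $(get_controller) == NVMe0 && $(get_bdev) == NVMe0n1 ]]

At this stage the target is unreachable but the controller is still registered, so both names resolve; once the ctrlr-loss timeout expires, the same queries return empty strings, which is exactly what the later [[ '' == '' ]] comparisons in this log assert.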
00:22:10.076 00:22:10.076 Latency(us) 00:22:10.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.076 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:10.076 Verification LBA range: start 0x0 length 0x4000 00:22:10.076 NVMe0n1 : 8.12 1198.94 4.68 15.76 0.00 105221.26 2189.50 7015926.69 00:22:10.076 =================================================================================================================== 00:22:10.076 Total : 1198.94 4.68 15.76 0.00 105221.26 2189.50 7015926.69 00:22:10.076 0 00:22:10.334 22:16:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:10.334 22:16:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:10.334 22:16:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:10.591 22:16:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:10.591 22:16:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:10.591 22:16:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:10.591 22:16:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96085 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96038 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96038 ']' 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96038 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96038 00:22:10.849 killing process with pid 96038 00:22:10.849 Received shutdown signal, test time was about 9.300788 seconds 00:22:10.849 00:22:10.849 Latency(us) 00:22:10.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.849 =================================================================================================================== 00:22:10.849 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96038' 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96038 00:22:10.849 22:16:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96038 00:22:11.107 22:16:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:11.365 [2024-07-15 22:16:58.169027] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
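The killprocess trace above (common/autotest_common.sh@948 through @972) is the cleanup path for the first bdevperf instance, pid 96038. A hedged reconstruction of that helper, assembled only from the traced commands, follows; the real implementation in common/autotest_common.sh may differ in detail.

killprocess() {
    # Reconstructed from the xtrace above, not copied from autotest_common.sh.
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                           # @948: require a pid argument
    kill -0 "$pid" || return 1                          # @952: is the process still alive?
    if [ "$(uname)" = Linux ]; then                     # @953
        process_name=$(ps --no-headers -o comm= "$pid") # @954: here it resolves to reactor_2
    fi
    if [ "$process_name" = sudo ]; then                 # @958: sudo wrappers are special-cased
        :                                               # (branch not exercised in this run)
    fi
    echo "killing process with pid $pid"                # @966
    kill "$pid"                                         # @967: default SIGTERM
    wait "$pid"                                         # @972: reap it and pick up its exit code
}

Immediately after the kill, timeout.sh@71 re-adds the TCP listener with nvmf_subsystem_add_listener, so the next test case starts against a reachable target.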
00:22:11.365 22:16:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96242 00:22:11.365 22:16:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:11.365 22:16:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96242 /var/tmp/bdevperf.sock 00:22:11.365 22:16:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96242 ']' 00:22:11.365 22:16:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.365 22:16:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.365 22:16:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.365 22:16:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.365 22:16:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:11.365 [2024-07-15 22:16:58.247484] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:22:11.365 [2024-07-15 22:16:58.247589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96242 ] 00:22:11.623 [2024-07-15 22:16:58.388467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.623 [2024-07-15 22:16:58.464024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.558 22:16:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.558 22:16:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:12.558 22:16:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:12.816 22:16:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:13.075 NVMe0n1 00:22:13.075 22:16:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:13.075 22:16:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96291 00:22:13.075 22:16:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:13.075 Running I/O for 10 seconds... 
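Condensed replay of the setup traced above for the second bdevperf run. All binaries, flags, addresses and the NQN are verbatim from this log; the backgrounding with & is inferred from bdevperf_pid and rpc_pid being recorded, and waitforlisten (not shown) blocks between the first two steps until the RPC socket is up.

spdk=/home/vagrant/spdk_repo/spdk

# Start bdevperf idle (-z waits for RPC) on its own socket: core mask 0x4,
# queue depth 128, 4 KiB verify workload for 10 seconds.
"$spdk/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -f &

# Configure bdev_nvme (-r -1 as traced above) and attach the NVMe-oF/TCP
# controller with the timeout knobs this test exercises.
"$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
"$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Kick off the actual I/O ("Running I/O for 10 seconds..." above) in the background.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &

With a 1-second reconnect delay, a 2-second fast-io-fail window and a 5-second ctrlr-loss timeout, the controller retries roughly once per second after a disconnect and is abandoned after about five seconds, which is the behaviour the nvmf_subsystem_remove_listener step that follows is meant to provoke.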
00:22:14.008 22:17:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.270 [2024-07-15 22:17:01.155847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.155900] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.155912] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.155920] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.155931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.155939] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.155947] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.155956] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.155964] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.155972] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.155981] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.155989] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.155997] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156005] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156013] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156021] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156038] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156046] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156053] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156069] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156078] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156101] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156117] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156126] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156133] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156141] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156149] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156157] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156165] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156181] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156190] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156199] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156207] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156224] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156231] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156239] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156247] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156255] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993b50 is same with the state(5) to be set 00:22:14.270 [2024-07-15 22:17:01.156489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.270 [2024-07-15 22:17:01.156527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.270 [2024-07-15 22:17:01.156564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.270 [2024-07-15 22:17:01.156585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.270 [2024-07-15 22:17:01.156606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.270 [2024-07-15 22:17:01.156627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.270 [2024-07-15 22:17:01.156647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.270 [2024-07-15 22:17:01.156668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.270 [2024-07-15 22:17:01.156688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.270 [2024-07-15 22:17:01.156710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.270 [2024-07-15 22:17:01.156730] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.270 [2024-07-15 22:17:01.156750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.270 [2024-07-15 22:17:01.156771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.270 [2024-07-15 22:17:01.156792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.270 [2024-07-15 22:17:01.156812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.270 [2024-07-15 22:17:01.156833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.270 [2024-07-15 22:17:01.156854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.270 [2024-07-15 22:17:01.156865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.270 [2024-07-15 22:17:01.156875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.156886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.156896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.156907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.156916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.156928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.156937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.156948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.156957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.156968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.156977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.156989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.156998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 
[2024-07-15 22:17:01.157171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:108 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.271 [2024-07-15 22:17:01.157775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.271 [2024-07-15 22:17:01.157784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.157795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82544 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.157804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.157815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.157825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.157836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.157847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.157858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.157867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.157879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.157890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.157902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.157911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.157922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.157931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.157942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.157952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.157963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.157972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.157983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.157992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 
[2024-07-15 22:17:01.158013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:14.272 [2024-07-15 22:17:01.158661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.272 [2024-07-15 22:17:01.158670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.158681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.273 [2024-07-15 22:17:01.158690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.158701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.273 [2024-07-15 22:17:01.158711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.158722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.273 [2024-07-15 22:17:01.158731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.158742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.273 [2024-07-15 22:17:01.158752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.158763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.273 [2024-07-15 22:17:01.158772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.158784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.273 [2024-07-15 22:17:01.158793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.158823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.158835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82928 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.158844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.158857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.158866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.158881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82936 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.158889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.158899] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.158906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.158916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82944 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.158925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.158935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.158942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.158950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82952 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.158959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.158969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.158977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.158985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82024 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.158994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.159003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.159011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.159019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82032 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.159033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.159042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.159050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.159058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82040 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.159066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.159075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.159095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.159104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82048 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.159113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.159122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.159130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.159138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82056 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.159147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.159156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.159164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.159171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82064 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.159180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.159190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.159197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.159207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82072 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.159216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.159225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.159233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.159241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82080 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.159250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.159259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.159266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.159274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82088 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.159283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.159292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.159299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.159307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82096 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.159317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.159327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.159334] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.159342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82104 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.159351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.159361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.159368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.159376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82112 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.159384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.175405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.175490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.175517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82120 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.175542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.175564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.175580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.175598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82128 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.175619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.175639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.175656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.175675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82136 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.175694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.175715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:14.273 [2024-07-15 22:17:01.175732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:14.273 [2024-07-15 22:17:01.175749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82144 len:8 PRP1 0x0 PRP2 0x0 00:22:14.273 [2024-07-15 22:17:01.175768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.175864] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc3a8d0 was disconnected and freed. reset controller. 
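The "(00/08)" printed with each aborted completion above lines up with the NVMe status pair (status code type / status code) in hex: type 0x0 is the Generic Command Status set, and code 0x08 in that set is "Command Aborted due to SQ Deletion", which is the status the host-side driver uses when it manually completes commands queued on a submission queue that is being torn down. A throwaway decoder is sketched below; the function name and the small subset of generic codes it maps are illustrative only, not part of the test suite.

    # Minimal decoder for the "(sct/sc)" pair seen in the completion prints above.
    # Only a few Generic Command Status (sct 0x0) values are mapped here.
    decode_nvme_status() {   # usage: decode_nvme_status 00 08
      local sct=$1 sc=$2
      if [ "$sct" != "00" ]; then
        echo "sct=0x$sct sc=0x$sc (non-generic status code type)"
        return
      fi
      case "$sc" in
        00) echo "Successful Completion" ;;
        07) echo "Command Abort Requested" ;;
        08) echo "Command Aborted due to SQ Deletion" ;;
        *)  echo "Generic status code 0x$sc" ;;
      esac
    }
    decode_nvme_status 00 08   # prints: Command Aborted due to SQ Deletion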
00:22:14.273 [2024-07-15 22:17:01.176152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.273 [2024-07-15 22:17:01.176207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.273 [2024-07-15 22:17:01.176241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.273 [2024-07-15 22:17:01.176262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.274 [2024-07-15 22:17:01.176311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.274 [2024-07-15 22:17:01.176333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.274 [2024-07-15 22:17:01.176357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.274 [2024-07-15 22:17:01.176376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.274 [2024-07-15 22:17:01.176397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcd240 is same with the state(5) to be set 00:22:14.274 [2024-07-15 22:17:01.176902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:14.274 [2024-07-15 22:17:01.176963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcd240 (9): Bad file descriptor 00:22:14.274 [2024-07-15 22:17:01.177192] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.274 [2024-07-15 22:17:01.177255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcd240 with addr=10.0.0.2, port=4420 00:22:14.274 [2024-07-15 22:17:01.177279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcd240 is same with the state(5) to be set 00:22:14.274 [2024-07-15 22:17:01.177318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcd240 (9): Bad file descriptor 00:22:14.274 [2024-07-15 22:17:01.177351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:14.274 [2024-07-15 22:17:01.177372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:14.274 [2024-07-15 22:17:01.177393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:14.274 [2024-07-15 22:17:01.177434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
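For the connect() failures above, errno = 111 is ECONNREFUSED: the target address is reachable but nothing is listening on 10.0.0.2:4420 while the listener is down, so every reset attempt fails until it is re-added. A quick way to check the target's current listeners from the same box is sketched below; it assumes the repo path seen elsewhere in this run and the target's default RPC socket (/var/tmp/spdk.sock), both of which may differ in other setups.

    # List subsystems and their listen_addresses over the target's RPC socket.
    # If nqn.2016-06.io.spdk:cnode1 shows no tcp listener on 10.0.0.2:4420,
    # ECONNREFUSED on the initiator side is the expected outcome.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems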
00:22:14.274 [2024-07-15 22:17:01.177457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:14.274 22:17:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:22:15.648 [2024-07-15 22:17:02.177605] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:15.648 [2024-07-15 22:17:02.177666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcd240 with addr=10.0.0.2, port=4420
00:22:15.648 [2024-07-15 22:17:02.177683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcd240 is same with the state(5) to be set
00:22:15.648 [2024-07-15 22:17:02.177710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcd240 (9): Bad file descriptor
00:22:15.648 [2024-07-15 22:17:02.177729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:15.648 [2024-07-15 22:17:02.177739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:15.648 [2024-07-15 22:17:02.177750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:15.648 [2024-07-15 22:17:02.177776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:15.648 [2024-07-15 22:17:02.177788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:15.648 22:17:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:15.648 [2024-07-15 22:17:02.448877] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:16.582 22:17:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96291
00:22:16.582 [2024-07-15 22:17:03.190748] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:23.146
00:22:23.146                                                                   Latency(us)
00:22:23.146 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:23.146 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:23.146 Verification LBA range: start 0x0 length 0x4000
00:22:23.146 NVMe0n1                     :      10.00    6178.07      24.13       0.00     0.00   20676.96     919.74 3050402.91
00:22:23.146 ===================================================================================================================
00:22:23.146 Total                       :                 6178.07      24.13       0.00     0.00   20676.96     919.74 3050402.91
00:22:23.146 0
00:22:23.146 22:17:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:23.146 22:17:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96409
00:22:23.146 22:17:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:23.404 Running I/O for 10 seconds...
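The rpc.py and bdevperf.py invocations captured here, together with the nvmf_subsystem_remove_listener call that follows at @99, are enough to replay this phase of nvmf_timeout by hand. The sketch below strings together only commands that appear verbatim in the log: drop the TCP listener so queued I/O is aborted and resets fail, re-add it so the reconnect succeeds, then kick bdevperf's perform_tests again over its RPC socket. The repo paths, NQN, address/port and bdevperf socket are taken from this particular run, so treat the whole thing as an illustration rather than the test script itself.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Remove the TCP listener: outstanding I/O completes with ABORTED - SQ DELETION
    # and the host's controller resets keep failing while nothing accepts port 4420.
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    sleep 1

    # Restore the listener; the next reset/reconnect attempt should succeed.
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420

    # Re-run the I/O job against the already-running bdevperf instance.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    rpc_pid=$!
    sleep 1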
00:22:24.342 22:17:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.342 [2024-07-15 22:17:11.276239] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ec660 is same with the state(5) to be set 00:22:24.342 [2024-07-15 22:17:11.276307] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ec660 is same with the state(5) to be set 00:22:24.342 [2024-07-15 22:17:11.276320] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ec660 is same with the state(5) to be set 00:22:24.342 [2024-07-15 22:17:11.276328] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ec660 is same with the state(5) to be set 00:22:24.342 [2024-07-15 22:17:11.276337] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ec660 is same with the state(5) to be set 00:22:24.342 [2024-07-15 22:17:11.276345] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ec660 is same with the state(5) to be set 00:22:24.342 [2024-07-15 22:17:11.276755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.276798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.276821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.276833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.276846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.276856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.276867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.276877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.276888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.276898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.276909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.276918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.276929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.276939] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.276953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.276968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.276980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.276989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:24.342 [2024-07-15 22:17:11.277392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.342 [2024-07-15 22:17:11.277525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277600] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277806] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.342 [2024-07-15 22:17:11.277890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.342 [2024-07-15 22:17:11.277900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.277911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.277920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.277931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.277941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.277952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.277961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.277972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.277982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.277993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82704 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 
[2024-07-15 22:17:11.278247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.343 [2024-07-15 22:17:11.278521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.278988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.278999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.279009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.279020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.279029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.279041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.279050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.279062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.279071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.279094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.279106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:24.343 [2024-07-15 22:17:11.279117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.279126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.279138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.279148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.279159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.279172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.343 [2024-07-15 22:17:11.279186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.343 [2024-07-15 22:17:11.279196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.344 [2024-07-15 22:17:11.279218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.344 [2024-07-15 22:17:11.279239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.344 [2024-07-15 22:17:11.279260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.344 [2024-07-15 22:17:11.279280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.344 [2024-07-15 22:17:11.279302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.344 [2024-07-15 22:17:11.279323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279334] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.344 [2024-07-15 22:17:11.279343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.344 [2024-07-15 22:17:11.279364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:24.344 [2024-07-15 22:17:11.279384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.344 [2024-07-15 22:17:11.279404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.344 [2024-07-15 22:17:11.279424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.344 [2024-07-15 22:17:11.279445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.344 [2024-07-15 22:17:11.279465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.344 [2024-07-15 22:17:11.279486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:24.344 [2024-07-15 22:17:11.279506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:24.344 [2024-07-15 22:17:11.279548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:24.344 [2024-07-15 22:17:11.279557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82944 len:8 PRP1 0x0 PRP2 0x0 00:22:24.344 [2024-07-15 
22:17:11.279568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.344 [2024-07-15 22:17:11.279613] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc4cda0 was disconnected and freed. reset controller. 00:22:24.344 [2024-07-15 22:17:11.279862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:24.344 [2024-07-15 22:17:11.279959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcd240 (9): Bad file descriptor 00:22:24.344 [2024-07-15 22:17:11.280074] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.344 [2024-07-15 22:17:11.280112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcd240 with addr=10.0.0.2, port=4420 00:22:24.344 [2024-07-15 22:17:11.280123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcd240 is same with the state(5) to be set 00:22:24.344 [2024-07-15 22:17:11.280141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcd240 (9): Bad file descriptor 00:22:24.344 [2024-07-15 22:17:11.280161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:24.344 [2024-07-15 22:17:11.280171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:24.344 [2024-07-15 22:17:11.280182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:24.344 [2024-07-15 22:17:11.280201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:24.344 [2024-07-15 22:17:11.280212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:24.603 22:17:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:25.537 [2024-07-15 22:17:12.280375] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.537 [2024-07-15 22:17:12.280458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcd240 with addr=10.0.0.2, port=4420 00:22:25.537 [2024-07-15 22:17:12.280476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcd240 is same with the state(5) to be set 00:22:25.537 [2024-07-15 22:17:12.280506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcd240 (9): Bad file descriptor 00:22:25.537 [2024-07-15 22:17:12.280526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:25.537 [2024-07-15 22:17:12.280537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:25.537 [2024-07-15 22:17:12.280548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:25.537 [2024-07-15 22:17:12.280575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:25.537 [2024-07-15 22:17:12.280587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:26.472 [2024-07-15 22:17:13.280733] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.472 [2024-07-15 22:17:13.280811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcd240 with addr=10.0.0.2, port=4420 00:22:26.472 [2024-07-15 22:17:13.280827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcd240 is same with the state(5) to be set 00:22:26.472 [2024-07-15 22:17:13.280856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcd240 (9): Bad file descriptor 00:22:26.472 [2024-07-15 22:17:13.280876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:26.472 [2024-07-15 22:17:13.280886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:26.472 [2024-07-15 22:17:13.280896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:26.472 [2024-07-15 22:17:13.280922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:26.472 [2024-07-15 22:17:13.280935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:27.404 [2024-07-15 22:17:14.284687] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.404 [2024-07-15 22:17:14.284780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcd240 with addr=10.0.0.2, port=4420 00:22:27.404 [2024-07-15 22:17:14.284798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcd240 is same with the state(5) to be set 00:22:27.404 [2024-07-15 22:17:14.285064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcd240 (9): Bad file descriptor 00:22:27.404 [2024-07-15 22:17:14.285350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:27.404 [2024-07-15 22:17:14.285377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:27.404 [2024-07-15 22:17:14.285389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:27.404 [2024-07-15 22:17:14.289409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:27.404 [2024-07-15 22:17:14.289458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:27.404 22:17:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.662 [2024-07-15 22:17:14.582123] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.662 22:17:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 96409 00:22:28.662 [2024-07-15 22:17:15.329137] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
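The trace above shows the host side retrying the TCP connection roughly once per second (connect() failed, errno = 111) until host/timeout.sh re-adds the listener at step @102 and the controller reset finally succeeds. A minimal sketch of that step sequence, assuming a $bdevperf_pid shell variable in place of the literal pid 96409 recorded in the trace:
  sleep 3                                              # @101: keep the listener down for 3 s
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                       # @102: restore the NVMe/TCP listener
  wait "$bdevperf_pid"                                 # @103: let the interrupted bdevperf run finish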
00:22:33.922 00:22:33.922 Latency(us) 00:22:33.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.922 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:33.922 Verification LBA range: start 0x0 length 0x4000 00:22:33.922 NVMe0n1 : 10.01 5330.77 20.82 3488.93 0.00 14486.10 904.84 3019898.88 00:22:33.922 =================================================================================================================== 00:22:33.922 Total : 5330.77 20.82 3488.93 0.00 14486.10 0.00 3019898.88 00:22:33.922 0 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96242 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96242 ']' 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96242 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96242 00:22:33.922 killing process with pid 96242 00:22:33.922 Received shutdown signal, test time was about 10.000000 seconds 00:22:33.922 00:22:33.922 Latency(us) 00:22:33.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.922 =================================================================================================================== 00:22:33.922 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96242' 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96242 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96242 00:22:33.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96530 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96530 /var/tmp/bdevperf.sock 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96530 ']' 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.922 22:17:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:33.922 [2024-07-15 22:17:20.374723] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
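The summary above closes the first bdevperf run (5330.77 IOPS with 3488.93 failures/s accumulated while the listener was down); the script then kills that bdevperf (pid 96242) and launches a fresh one in idle mode for the ctrlr-loss-timeout half of the test. A rough sketch of the relaunch, assuming the killprocess/waitforlisten helpers from autotest_common.sh and a backgrounded launch (the trace only records the resulting pids 96242 and 96530):
  killprocess 96242                                    # stop the first bdevperf instance
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w randread -t 10 -f &            # -z: start idle and wait for RPC configuration
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock # block until the RPC socket is up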
00:22:33.922 [2024-07-15 22:17:20.375543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96530 ]
00:22:33.922 [2024-07-15 22:17:20.515315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:33.922 [2024-07-15 22:17:20.576430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:34.488 22:17:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:34.488 22:17:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:22:34.488 22:17:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96557
00:22:34.488 22:17:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96530 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:22:34.488 22:17:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:22:34.748 22:17:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:22:35.314 NVMe0n1
00:22:35.314 22:17:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96606
00:22:35.314 22:17:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:35.314 22:17:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:22:35.314 Running I/O for 10 seconds...
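With the new bdevperf instance idle, the script attaches the nvmf_timeout.bt bpftrace probe and configures the NVMe bdev over the bdevperf RPC socket before kicking off the workload; the --ctrlr-loss-timeout-sec 5 / --reconnect-delay-sec 2 pair is what the rest of the timeout test exercises. Run by hand, the traced sequence would look roughly like this (commands and values copied from the trace; backgrounding of the probe and of perform_tests is an assumption inferred from the recorded dtrace_pid/rpc_pid):
  cd /home/vagrant/spdk_repo/spdk
  scripts/bpftrace.sh 96530 scripts/bpf/nvmf_timeout.bt &                       # dtrace_pid=96557 in the trace
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2                        # creates bdev NVMe0n1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &  # rpc_pid=96606 in the trace
  sleep 1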
00:22:36.248 22:17:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:36.508 [2024-07-15 22:17:23.306251] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ef870 is same with the state(5) to be set
[... the same tcp.c:1621 recv-state error for tqpair=0x7ef870 repeats for roughly 125 further consecutive entries, 2024-07-15 22:17:23.306310 through 22:17:23.307341 ...]
00:22:36.509 [2024-07-15 22:17:23.307743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:36.509 [2024-07-15 22:17:23.307788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pattern repeats for roughly 90 further in-flight commands, 2024-07-15 22:17:23.307812 through 22:17:23.310339: one nvme_qpair.c: 243:nvme_io_qpair_print_command READ notice per command (distinct sqid:1 cid/nsid/lba, SGL TRANSPORT DATA BLOCK), each followed by an "ABORTED - SQ DELETION (00/08)" completion from nvme_qpair.c: 474:spdk_nvme_print_completion ...]
00:22:36.511 [2024-07-15 22:17:23.310388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:36.511 [2024-07-15 22:17:23.310401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79432 len:8 PRP1 0x0 PRP2 0x0
00:22:36.511 [2024-07-15 22:17:23.310411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:36.511 [2024-07-15 22:17:23.310425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same "aborting queued i/o" / "Command completed manually" / READ / "ABORTED - SQ DELETION (00/08)" cycle repeats for roughly 14 further queued commands (distinct lba per entry), 2024-07-15 22:17:23.310434 through 22:17:23.330742 ...]
00:22:36.511 [2024-07-15 22:17:23.330816] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dd18d0 was disconnected and freed. reset controller.
00:22:36.511 [2024-07-15 22:17:23.331035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.511 [2024-07-15 22:17:23.331060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.511 [2024-07-15 22:17:23.331079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.511 [2024-07-15 22:17:23.331122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.511 [2024-07-15 22:17:23.331137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.511 [2024-07-15 22:17:23.331150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.511 [2024-07-15 22:17:23.331165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.511 [2024-07-15 22:17:23.331178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.511 [2024-07-15 22:17:23.331192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d64240 is same with the state(5) to be set 00:22:36.511 [2024-07-15 22:17:23.331568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:36.511 [2024-07-15 22:17:23.331610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d64240 (9): Bad file descriptor 00:22:36.511 [2024-07-15 22:17:23.331761] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.511 [2024-07-15 22:17:23.331792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d64240 with addr=10.0.0.2, port=4420 00:22:36.511 [2024-07-15 22:17:23.331809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d64240 is same with the state(5) to be set 00:22:36.511 [2024-07-15 22:17:23.331835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d64240 (9): Bad file descriptor 00:22:36.511 [2024-07-15 22:17:23.331858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:36.511 [2024-07-15 22:17:23.331872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:36.511 [2024-07-15 22:17:23.331899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:36.511 22:17:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 96606 00:22:36.511 [2024-07-15 22:17:23.331927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:36.511 [2024-07-15 22:17:23.331944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:38.411 [2024-07-15 22:17:25.332161] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.411 [2024-07-15 22:17:25.332243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d64240 with addr=10.0.0.2, port=4420 00:22:38.411 [2024-07-15 22:17:25.332262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d64240 is same with the state(5) to be set 00:22:38.411 [2024-07-15 22:17:25.332307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d64240 (9): Bad file descriptor 00:22:38.411 [2024-07-15 22:17:25.332331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:38.411 [2024-07-15 22:17:25.332343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:38.411 [2024-07-15 22:17:25.332357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:38.411 [2024-07-15 22:17:25.332388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.411 [2024-07-15 22:17:25.332401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:40.943 [2024-07-15 22:17:27.332608] posix.c: 977:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.943 [2024-07-15 22:17:27.332684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d64240 with addr=10.0.0.2, port=4420 00:22:40.943 [2024-07-15 22:17:27.332703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d64240 is same with the state(5) to be set 00:22:40.943 [2024-07-15 22:17:27.332733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d64240 (9): Bad file descriptor 00:22:40.943 [2024-07-15 22:17:27.332754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:40.943 [2024-07-15 22:17:27.332764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:40.943 [2024-07-15 22:17:27.332778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:40.943 [2024-07-15 22:17:27.332806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:40.943 [2024-07-15 22:17:27.332818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.449 [2024-07-15 22:17:29.332949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:42.449 [2024-07-15 22:17:29.333037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:42.449 [2024-07-15 22:17:29.333052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:42.450 [2024-07-15 22:17:29.333064] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:42.450 [2024-07-15 22:17:29.333108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:43.826 00:22:43.826 Latency(us) 00:22:43.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.826 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:43.826 NVMe0n1 : 8.19 2526.16 9.87 15.63 0.00 50375.93 2532.07 7046430.72 00:22:43.826 =================================================================================================================== 00:22:43.826 Total : 2526.16 9.87 15.63 0.00 50375.93 2532.07 7046430.72 00:22:43.826 0 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:43.826 Attaching 5 probes... 00:22:43.826 1365.191836: reset bdev controller NVMe0 00:22:43.826 1365.299124: reconnect bdev controller NVMe0 00:22:43.826 3365.621162: reconnect delay bdev controller NVMe0 00:22:43.826 3365.647798: reconnect bdev controller NVMe0 00:22:43.826 5366.094363: reconnect delay bdev controller NVMe0 00:22:43.826 5366.119728: reconnect bdev controller NVMe0 00:22:43.826 7366.547403: reconnect delay bdev controller NVMe0 00:22:43.826 7366.578357: reconnect bdev controller NVMe0 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 96557 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96530 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96530 ']' 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96530 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96530 00:22:43.826 killing process with pid 96530 00:22:43.826 Received shutdown signal, test time was about 8.242777 seconds 00:22:43.826 00:22:43.826 Latency(us) 00:22:43.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.826 =================================================================================================================== 00:22:43.826 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96530' 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96530 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96530 00:22:43.826 22:17:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:44.085 
22:17:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:44.085 rmmod nvme_tcp 00:22:44.085 rmmod nvme_fabrics 00:22:44.085 rmmod nvme_keyring 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 95942 ']' 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 95942 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 95942 ']' 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 95942 00:22:44.085 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:44.086 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:44.086 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95942 00:22:44.086 killing process with pid 95942 00:22:44.086 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:44.086 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:44.086 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95942' 00:22:44.086 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 95942 00:22:44.086 22:17:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 95942 00:22:44.345 22:17:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:44.345 22:17:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:44.345 22:17:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:44.345 22:17:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.345 22:17:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:44.345 22:17:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.345 22:17:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.345 22:17:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.345 22:17:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:44.345 00:22:44.345 real 0m47.386s 00:22:44.345 user 2m20.554s 00:22:44.345 sys 0m4.757s 00:22:44.345 22:17:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:44.345 ************************************ 00:22:44.345 END TEST nvmf_timeout 00:22:44.345 ************************************ 00:22:44.345 22:17:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.345 22:17:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:44.345 22:17:31 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:22:44.345 22:17:31 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:22:44.345 
22:17:31 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.345 22:17:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.345 22:17:31 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:22:44.345 00:22:44.345 real 15m49.354s 00:22:44.345 user 42m30.605s 00:22:44.345 sys 3m23.516s 00:22:44.345 22:17:31 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:44.345 ************************************ 00:22:44.345 END TEST nvmf_tcp 00:22:44.345 ************************************ 00:22:44.345 22:17:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.345 22:17:31 -- common/autotest_common.sh@1142 -- # return 0 00:22:44.345 22:17:31 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:22:44.345 22:17:31 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:44.345 22:17:31 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:44.345 22:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.345 22:17:31 -- common/autotest_common.sh@10 -- # set +x 00:22:44.345 ************************************ 00:22:44.345 START TEST spdkcli_nvmf_tcp 00:22:44.345 ************************************ 00:22:44.345 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:44.605 * Looking for test storage... 00:22:44.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # 
NET_TYPE=virt 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:44.605 22:17:31 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96824 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96824 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 96824 ']' 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.605 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.605 [2024-07-15 22:17:31.419162] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:22:44.605 [2024-07-15 22:17:31.419272] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96824 ] 00:22:44.864 [2024-07-15 22:17:31.554367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:44.864 [2024-07-15 22:17:31.625107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.864 [2024-07-15 22:17:31.625117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.864 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.864 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:22:44.864 22:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:44.864 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:44.864 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.864 22:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:44.864 22:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:22:44.864 22:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:44.864 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:44.864 22:17:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.864 22:17:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:44.864 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:44.864 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:44.864 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:44.864 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:22:44.864 '\''/bdevs/malloc 
create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:22:44.864 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:22:44.864 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:44.864 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:44.864 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:22:44.864 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:22:44.864 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:22:44.864 ' 00:22:48.143 [2024-07-15 22:17:34.448607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.075 [2024-07-15 22:17:35.757702] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:22:51.604 [2024-07-15 22:17:38.135226] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:22:53.505 [2024-07-15 22:17:40.228909] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:22:54.881 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:54.881 Executing command: 
['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:54.881 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:54.881 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:54.881 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:54.881 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:54.881 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:54.881 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:54.881 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:54.881 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:22:54.881 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:22:54.881 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:55.139 22:17:41 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:55.139 22:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.139 22:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.139 22:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:55.139 22:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.139 22:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.139 22:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:22:55.139 22:17:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:22:55.705 22:17:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:55.705 22:17:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:55.705 22:17:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:55.705 22:17:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.705 22:17:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.705 22:17:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:55.705 22:17:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.705 22:17:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.705 22:17:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:55.705 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:55.705 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:55.705 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:55.705 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:22:55.705 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:22:55.705 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:55.705 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:55.705 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:55.706 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:55.706 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:55.706 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:55.706 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:55.706 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:55.706 ' 00:23:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:23:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:23:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts 
delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:23:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:23:00.968 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:23:00.968 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:23:00.968 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:00.968 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:23:00.968 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:23:00.968 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:23:00.968 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:23:00.968 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:23:00.968 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96824 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96824 ']' 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96824 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96824 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:00.968 killing process with pid 96824 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96824' 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 96824 00:23:00.968 22:17:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 96824 00:23:01.226 22:17:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:23:01.226 22:17:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:23:01.226 22:17:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96824 ']' 00:23:01.226 22:17:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96824 00:23:01.226 22:17:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96824 ']' 00:23:01.226 Process with pid 96824 is not found 00:23:01.226 22:17:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96824 00:23:01.226 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (96824) - No such process 00:23:01.226 22:17:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 96824 is not found' 00:23:01.226 22:17:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:23:01.226 22:17:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:23:01.226 22:17:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test 
/home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:23:01.226 00:23:01.226 real 0m16.818s 00:23:01.226 user 0m36.567s 00:23:01.226 sys 0m0.842s 00:23:01.226 22:17:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:01.226 22:17:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:01.226 ************************************ 00:23:01.226 END TEST spdkcli_nvmf_tcp 00:23:01.226 ************************************ 00:23:01.226 22:17:48 -- common/autotest_common.sh@1142 -- # return 0 00:23:01.226 22:17:48 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:01.226 22:17:48 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:01.226 22:17:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:01.226 22:17:48 -- common/autotest_common.sh@10 -- # set +x 00:23:01.226 ************************************ 00:23:01.226 START TEST nvmf_identify_passthru 00:23:01.226 ************************************ 00:23:01.226 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:01.484 * Looking for test storage... 00:23:01.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:01.484 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.484 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:01.485 22:17:48 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.485 22:17:48 nvmf_identify_passthru -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.485 22:17:48 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.485 22:17:48 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.485 22:17:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.485 22:17:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.485 22:17:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:01.485 22:17:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:01.485 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:01.485 22:17:48 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.485 22:17:48 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.485 22:17:48 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.485 22:17:48 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.485 22:17:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.485 22:17:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.485 22:17:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:01.485 22:17:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.485 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.485 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:01.485 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@432 
-- # nvmf_veth_init 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:01.485 Cannot find device "nvmf_tgt_br" 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:01.485 Cannot find device "nvmf_tgt_br2" 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:01.485 Cannot find device "nvmf_tgt_br" 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:01.485 Cannot find device "nvmf_tgt_br2" 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:01.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:01.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:01.485 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:01.743 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:01.743 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:01.743 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:01.743 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:01.743 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:01.743 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:01.743 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:01.743 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:01.743 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:01.743 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:01.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:23:01.744 00:23:01.744 --- 10.0.0.2 ping statistics --- 00:23:01.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.744 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:01.744 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:01.744 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:23:01.744 00:23:01.744 --- 10.0.0.3 ping statistics --- 00:23:01.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.744 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:01.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:23:01.744 00:23:01.744 --- 10.0.0.1 ping statistics --- 00:23:01.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.744 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.744 22:17:48 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.744 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:01.744 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:01.744 22:17:48 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:23:01.744 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:23:01.744 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:23:01.744 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:23:01.744 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:23:01.744 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:23:02.002 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
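[Note on the identify_passthru trace above and below: the steps boil down to a comparison. The script first reads the serial and model number straight from the local PCIe controller, later reads the same fields back over NVMe/TCP from the passthru subsystem, and fails if they differ. The following is only a condensed sketch of that flow, reusing the binary path, BDF, and addresses seen in this run; the variable names (pcie_serial, tcp_serial, ...) are illustrative and are not the script's own.

  # Sketch only - condensed from the identify_passthru.sh steps traced in this log.
  id_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
  bdf=0000:00:10.0    # first NVMe BDF reported by gen_nvme.sh in this run

  # Identify the controller directly over PCIe.
  pcie_serial=$("$id_bin" -r "trtype:PCIe traddr:$bdf" -i 0 | awk '/Serial Number:/ {print $3}')
  pcie_model=$("$id_bin" -r "trtype:PCIe traddr:$bdf" -i 0 | awk '/Model Number:/ {print $3}')

  # Identify the same controller through the NVMe/TCP passthru subsystem created later in the test.
  tgt='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  tcp_serial=$("$id_bin" -r "$tgt" | awk '/Serial Number:/ {print $3}')
  tcp_model=$("$id_bin" -r "$tgt" | awk '/Model Number:/ {print $3}')

  # The test only passes if the passthru target reports identical identify data
  # (12340 / QEMU in this run).
  [ "$pcie_serial" = "$tcp_serial" ] && [ "$pcie_model" = "$tcp_model" ]
]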
00:23:02.002 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:23:02.002 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:23:02.002 22:17:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:23:02.260 22:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:23:02.260 22:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:23:02.260 22:17:49 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:02.260 22:17:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:02.260 22:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:23:02.260 22:17:49 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:02.260 22:17:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:02.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.260 22:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=97304 00:23:02.260 22:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.260 22:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:02.260 22:17:49 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 97304 00:23:02.260 22:17:49 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 97304 ']' 00:23:02.260 22:17:49 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.260 22:17:49 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.260 22:17:49 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.260 22:17:49 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.260 22:17:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:02.260 [2024-07-15 22:17:49.145993] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:23:02.260 [2024-07-15 22:17:49.146320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.518 [2024-07-15 22:17:49.283908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.518 [2024-07-15 22:17:49.356321] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.518 [2024-07-15 22:17:49.356593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.518 [2024-07-15 22:17:49.356864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.518 [2024-07-15 22:17:49.357046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:02.518 [2024-07-15 22:17:49.357181] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.518 [2024-07-15 22:17:49.357312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.518 [2024-07-15 22:17:49.357350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.518 [2024-07-15 22:17:49.358003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.518 [2024-07-15 22:17:49.358042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.451 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.451 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:23:03.451 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:23:03.451 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.451 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.451 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.451 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:23:03.451 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.451 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.451 [2024-07-15 22:17:50.195250] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.452 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.452 [2024-07-15 22:17:50.208925] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.452 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.452 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.452 Nvme0n1 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.452 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.452 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.452 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.452 [2024-07-15 22:17:50.346921] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.452 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.452 [ 00:23:03.452 { 00:23:03.452 "allow_any_host": true, 00:23:03.452 "hosts": [], 00:23:03.452 "listen_addresses": [], 00:23:03.452 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:03.452 "subtype": "Discovery" 00:23:03.452 }, 00:23:03.452 { 00:23:03.452 "allow_any_host": true, 00:23:03.452 "hosts": [], 00:23:03.452 "listen_addresses": [ 00:23:03.452 { 00:23:03.452 "adrfam": "IPv4", 00:23:03.452 "traddr": "10.0.0.2", 00:23:03.452 "trsvcid": "4420", 00:23:03.452 "trtype": "TCP" 00:23:03.452 } 00:23:03.452 ], 00:23:03.452 "max_cntlid": 65519, 00:23:03.452 "max_namespaces": 1, 00:23:03.452 "min_cntlid": 1, 00:23:03.452 "model_number": "SPDK bdev Controller", 00:23:03.452 "namespaces": [ 00:23:03.452 { 00:23:03.452 "bdev_name": "Nvme0n1", 00:23:03.452 "name": "Nvme0n1", 00:23:03.452 "nguid": "BC08E803841F438AA955A3D28E578152", 00:23:03.452 "nsid": 1, 00:23:03.452 "uuid": "bc08e803-841f-438a-a955-a3d28e578152" 00:23:03.452 } 00:23:03.452 ], 00:23:03.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.452 "serial_number": "SPDK00000000000001", 00:23:03.452 "subtype": "NVMe" 00:23:03.452 } 00:23:03.452 ] 00:23:03.452 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.452 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:03.452 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:23:03.452 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:23:03.711 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:23:03.711 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:03.711 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:23:03.711 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:23:03.968 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:23:03.968 22:17:50 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:23:03.968 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:23:03.968 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.968 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.968 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.968 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.968 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:23:03.968 22:17:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:23:03.968 22:17:50 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.968 22:17:50 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:23:03.968 22:17:50 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.968 22:17:50 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:23:03.968 22:17:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.968 22:17:50 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.968 rmmod nvme_tcp 00:23:03.968 rmmod nvme_fabrics 00:23:04.226 rmmod nvme_keyring 00:23:04.226 22:17:50 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.226 22:17:50 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:23:04.226 22:17:50 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:23:04.226 22:17:50 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 97304 ']' 00:23:04.226 22:17:50 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 97304 00:23:04.226 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 97304 ']' 00:23:04.226 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 97304 00:23:04.226 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:23:04.226 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.226 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97304 00:23:04.226 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:04.226 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:04.226 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97304' 00:23:04.226 killing process with pid 97304 00:23:04.226 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 97304 00:23:04.226 22:17:50 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 97304 00:23:04.226 22:17:51 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:04.226 22:17:51 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:04.226 22:17:51 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:04.226 22:17:51 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.226 22:17:51 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.226 22:17:51 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.226 22:17:51 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:04.226 22:17:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.485 22:17:51 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:04.485 00:23:04.485 real 0m3.063s 00:23:04.485 user 0m7.634s 00:23:04.485 sys 0m0.749s 00:23:04.485 22:17:51 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:04.485 22:17:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:04.485 ************************************ 00:23:04.485 END TEST nvmf_identify_passthru 00:23:04.485 ************************************ 00:23:04.485 22:17:51 -- common/autotest_common.sh@1142 -- # return 0 00:23:04.485 22:17:51 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:04.485 22:17:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:04.485 22:17:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:04.485 22:17:51 -- common/autotest_common.sh@10 -- # set +x 00:23:04.485 ************************************ 00:23:04.485 START TEST nvmf_dif 00:23:04.485 ************************************ 00:23:04.485 22:17:51 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:04.485 * Looking for test storage... 00:23:04.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:04.485 22:17:51 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:04.485 22:17:51 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:04.485 22:17:51 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.485 22:17:51 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.485 22:17:51 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.485 22:17:51 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.485 22:17:51 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.485 22:17:51 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.485 22:17:51 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.485 22:17:51 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:04.486 22:17:51 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.486 22:17:51 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.486 22:17:51 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.486 22:17:51 nvmf_dif -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.486 22:17:51 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.486 22:17:51 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.486 22:17:51 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:04.486 22:17:51 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.486 22:17:51 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:04.486 22:17:51 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:04.486 22:17:51 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:04.486 22:17:51 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:04.486 22:17:51 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.486 22:17:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:04.486 22:17:51 nvmf_dif -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:04.486 Cannot find device "nvmf_tgt_br" 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@155 -- # true 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:04.486 Cannot find device "nvmf_tgt_br2" 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@156 -- # true 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:04.486 Cannot find device "nvmf_tgt_br" 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@158 -- # true 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:04.486 Cannot find device "nvmf_tgt_br2" 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@159 -- # true 00:23:04.486 22:17:51 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:04.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:04.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@170 -- # ip link 
add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:04.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:23:04.757 00:23:04.757 --- 10.0.0.2 ping statistics --- 00:23:04.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.757 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:04.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:04.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:23:04.757 00:23:04.757 --- 10.0.0.3 ping statistics --- 00:23:04.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.757 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:04.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:04.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:04.757 00:23:04.757 --- 10.0.0.1 ping statistics --- 00:23:04.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.757 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:04.757 22:17:51 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:05.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:05.322 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:05.322 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:05.322 22:17:52 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.322 22:17:52 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:05.322 22:17:52 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:05.322 22:17:52 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.322 22:17:52 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:05.322 22:17:52 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:05.322 22:17:52 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:05.322 22:17:52 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:05.322 22:17:52 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.322 22:17:52 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:05.322 22:17:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:05.322 22:17:52 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97648 00:23:05.322 22:17:52 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97648 00:23:05.322 22:17:52 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:05.322 22:17:52 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 97648 ']' 00:23:05.322 22:17:52 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.322 22:17:52 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.322 22:17:52 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.322 22:17:52 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.322 22:17:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:05.322 [2024-07-15 22:17:52.144115] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:23:05.322 [2024-07-15 22:17:52.144210] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.581 [2024-07-15 22:17:52.277045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.581 [2024-07-15 22:17:52.346726] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:05.581 [2024-07-15 22:17:52.346786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.581 [2024-07-15 22:17:52.346809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.581 [2024-07-15 22:17:52.346825] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.581 [2024-07-15 22:17:52.346838] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.581 [2024-07-15 22:17:52.346885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.581 22:17:52 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.581 22:17:52 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:23:05.581 22:17:52 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.581 22:17:52 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:05.581 22:17:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:05.581 22:17:52 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.581 22:17:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:05.581 22:17:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:05.581 22:17:52 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.581 22:17:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:05.581 [2024-07-15 22:17:52.481788] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.581 22:17:52 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.581 22:17:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:05.581 22:17:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:05.581 22:17:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.581 22:17:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:05.581 ************************************ 00:23:05.581 START TEST fio_dif_1_default 00:23:05.581 ************************************ 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:05.581 bdev_null0 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.581 22:17:52 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:05.581 [2024-07-15 22:17:52.521885] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.581 { 00:23:05.581 "params": { 00:23:05.581 "name": "Nvme$subsystem", 00:23:05.581 "trtype": "$TEST_TRANSPORT", 00:23:05.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.581 "adrfam": "ipv4", 00:23:05.581 "trsvcid": "$NVMF_PORT", 00:23:05.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.581 "hdgst": ${hdgst:-false}, 00:23:05.581 "ddgst": ${ddgst:-false} 00:23:05.581 }, 00:23:05.581 "method": "bdev_nvme_attach_controller" 00:23:05.581 } 00:23:05.581 EOF 00:23:05.581 )") 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:05.581 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:05.839 22:17:52 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:05.839 "params": { 00:23:05.839 "name": "Nvme0", 00:23:05.839 "trtype": "tcp", 00:23:05.839 "traddr": "10.0.0.2", 00:23:05.839 "adrfam": "ipv4", 00:23:05.839 "trsvcid": "4420", 00:23:05.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:05.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:05.839 "hdgst": false, 00:23:05.839 "ddgst": false 00:23:05.839 }, 00:23:05.839 "method": "bdev_nvme_attach_controller" 00:23:05.839 }' 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:05.839 22:17:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:05.839 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:05.839 fio-3.35 00:23:05.839 Starting 1 thread 00:23:18.077 00:23:18.077 filename0: (groupid=0, jobs=1): err= 0: pid=97719: Mon Jul 15 22:18:03 2024 00:23:18.077 read: IOPS=1954, BW=7820KiB/s (8007kB/s)(76.6MiB/10026msec) 00:23:18.077 slat (nsec): min=7719, max=49622, avg=8994.37, stdev=2795.93 00:23:18.077 clat (usec): min=456, max=42011, avg=2019.26, stdev=7690.01 00:23:18.077 lat (usec): min=464, max=42023, avg=2028.25, stdev=7690.07 00:23:18.077 clat percentiles (usec): 00:23:18.077 | 1.00th=[ 461], 5.00th=[ 465], 10.00th=[ 474], 20.00th=[ 478], 00:23:18.077 | 30.00th=[ 486], 40.00th=[ 490], 50.00th=[ 494], 60.00th=[ 
502], 00:23:18.077 | 70.00th=[ 506], 80.00th=[ 523], 90.00th=[ 578], 95.00th=[ 652], 00:23:18.077 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:23:18.077 | 99.99th=[42206] 00:23:18.077 bw ( KiB/s): min= 1536, max=16992, per=100.00%, avg=7838.40, stdev=3841.89, samples=20 00:23:18.077 iops : min= 384, max= 4248, avg=1959.60, stdev=960.47, samples=20 00:23:18.077 lat (usec) : 500=59.83%, 750=36.35%, 1000=0.06% 00:23:18.077 lat (msec) : 4=0.02%, 50=3.73% 00:23:18.077 cpu : usr=90.38%, sys=8.54%, ctx=16, majf=0, minf=9 00:23:18.077 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.077 issued rwts: total=19600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.077 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:18.077 00:23:18.077 Run status group 0 (all jobs): 00:23:18.077 READ: bw=7820KiB/s (8007kB/s), 7820KiB/s-7820KiB/s (8007kB/s-8007kB/s), io=76.6MiB (80.3MB), run=10026-10026msec 00:23:18.077 22:18:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:18.077 22:18:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:18.077 22:18:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.078 00:23:18.078 real 0m10.923s 00:23:18.078 user 0m9.647s 00:23:18.078 sys 0m1.078s 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:18.078 ************************************ 00:23:18.078 END TEST fio_dif_1_default 00:23:18.078 ************************************ 00:23:18.078 22:18:03 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:18.078 22:18:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:18.078 22:18:03 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:18.078 22:18:03 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.078 22:18:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:18.078 ************************************ 00:23:18.078 START TEST fio_dif_1_multi_subsystems 00:23:18.078 ************************************ 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:18.078 bdev_null0 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:18.078 [2024-07-15 22:18:03.496874] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:18.078 bdev_null1 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.078 22:18:03 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.078 { 00:23:18.078 "params": { 00:23:18.078 "name": "Nvme$subsystem", 00:23:18.078 "trtype": "$TEST_TRANSPORT", 00:23:18.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.078 "adrfam": "ipv4", 00:23:18.078 "trsvcid": "$NVMF_PORT", 00:23:18.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.078 "hdgst": ${hdgst:-false}, 00:23:18.078 "ddgst": ${ddgst:-false} 00:23:18.078 }, 00:23:18.078 "method": "bdev_nvme_attach_controller" 00:23:18.078 } 00:23:18.078 EOF 00:23:18.078 )") 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:18.078 22:18:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.078 { 00:23:18.078 "params": { 00:23:18.078 "name": "Nvme$subsystem", 00:23:18.078 "trtype": "$TEST_TRANSPORT", 00:23:18.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.078 "adrfam": "ipv4", 00:23:18.078 "trsvcid": "$NVMF_PORT", 00:23:18.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.078 "hdgst": ${hdgst:-false}, 00:23:18.078 "ddgst": ${ddgst:-false} 00:23:18.078 }, 00:23:18.078 "method": "bdev_nvme_attach_controller" 00:23:18.078 } 00:23:18.078 EOF 00:23:18.078 )") 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
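[Note on the fio wiring traced here: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem (Nvme0 for cnode0, Nvme1 for cnode1), and the fully expanded JSON is printed a few entries below. fio consumes it through the spdk_bdev ioengine plugin via --spdk_json_conf on an anonymous descriptor, so no temporary config file is written. Roughly, and only as a sketch of what the fio_bdev wrapper is doing with /dev/fd/61 and /dev/fd/62, under the assumption that it is run inside the test environment where gen_nvmf_target_json and gen_fio_conf are defined:

  # Sketch only - the real wrapper assembles these descriptors itself.
  fio_plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

  # Process substitutions stand in for the /dev/fd/62 (JSON config) and
  # /dev/fd/61 (fio job file) descriptors seen in the trace.
  LD_PRELOAD="$fio_plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev \
      --spdk_json_conf <(gen_nvmf_target_json 0 1) \
      <(gen_fio_conf)
]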
00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:23:18.078 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:18.078 "params": { 00:23:18.078 "name": "Nvme0", 00:23:18.078 "trtype": "tcp", 00:23:18.078 "traddr": "10.0.0.2", 00:23:18.078 "adrfam": "ipv4", 00:23:18.078 "trsvcid": "4420", 00:23:18.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:18.078 "hdgst": false, 00:23:18.078 "ddgst": false 00:23:18.078 }, 00:23:18.078 "method": "bdev_nvme_attach_controller" 00:23:18.078 },{ 00:23:18.078 "params": { 00:23:18.078 "name": "Nvme1", 00:23:18.079 "trtype": "tcp", 00:23:18.079 "traddr": "10.0.0.2", 00:23:18.079 "adrfam": "ipv4", 00:23:18.079 "trsvcid": "4420", 00:23:18.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.079 "hdgst": false, 00:23:18.079 "ddgst": false 00:23:18.079 }, 00:23:18.079 "method": "bdev_nvme_attach_controller" 00:23:18.079 }' 00:23:18.079 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:18.079 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:18.079 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.079 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:18.079 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:18.079 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:18.079 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:18.079 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:18.079 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:18.079 22:18:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:18.079 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:18.079 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:18.079 fio-3.35 00:23:18.079 Starting 2 threads 00:23:28.097 00:23:28.097 filename0: (groupid=0, jobs=1): err= 0: pid=97878: Mon Jul 15 22:18:14 2024 00:23:28.097 read: IOPS=222, BW=892KiB/s (913kB/s)(8928KiB/10012msec) 00:23:28.097 slat (nsec): min=4979, max=72083, avg=9937.56, stdev=4881.12 00:23:28.097 clat (usec): min=454, max=41713, avg=17910.44, stdev=20067.84 00:23:28.097 lat (usec): min=462, max=41724, avg=17920.37, stdev=20068.10 00:23:28.097 clat percentiles (usec): 00:23:28.097 | 1.00th=[ 469], 5.00th=[ 482], 10.00th=[ 490], 20.00th=[ 498], 00:23:28.097 | 30.00th=[ 510], 40.00th=[ 529], 50.00th=[ 578], 60.00th=[40633], 00:23:28.097 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:23:28.097 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:23:28.097 | 99.99th=[41681] 00:23:28.097 bw ( KiB/s): min= 544, max= 1408, per=50.23%, avg=891.10, stdev=242.56, samples=20 00:23:28.097 iops : 
min= 136, max= 352, avg=222.75, stdev=60.67, samples=20 00:23:28.097 lat (usec) : 500=21.33%, 750=34.23%, 1000=1.39% 00:23:28.097 lat (msec) : 2=0.04%, 10=0.18%, 50=42.83% 00:23:28.097 cpu : usr=95.20%, sys=4.38%, ctx=10, majf=0, minf=0 00:23:28.097 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.097 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.097 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:28.097 filename1: (groupid=0, jobs=1): err= 0: pid=97879: Mon Jul 15 22:18:14 2024 00:23:28.097 read: IOPS=220, BW=882KiB/s (903kB/s)(8832KiB/10012msec) 00:23:28.097 slat (nsec): min=7267, max=61077, avg=10524.86, stdev=5799.87 00:23:28.097 clat (usec): min=449, max=41920, avg=18102.82, stdev=20104.88 00:23:28.097 lat (usec): min=457, max=41939, avg=18113.35, stdev=20105.00 00:23:28.097 clat percentiles (usec): 00:23:28.097 | 1.00th=[ 469], 5.00th=[ 478], 10.00th=[ 490], 20.00th=[ 498], 00:23:28.097 | 30.00th=[ 515], 40.00th=[ 529], 50.00th=[ 570], 60.00th=[40633], 00:23:28.097 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:23:28.097 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:23:28.097 | 99.99th=[41681] 00:23:28.097 bw ( KiB/s): min= 544, max= 1536, per=49.67%, avg=881.65, stdev=261.46, samples=20 00:23:28.097 iops : min= 136, max= 384, avg=220.40, stdev=65.38, samples=20 00:23:28.097 lat (usec) : 500=20.47%, 750=35.19%, 1000=0.86% 00:23:28.097 lat (msec) : 10=0.18%, 50=43.30% 00:23:28.097 cpu : usr=95.14%, sys=4.40%, ctx=16, majf=0, minf=9 00:23:28.097 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:28.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.097 issued rwts: total=2208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.097 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:28.097 00:23:28.097 Run status group 0 (all jobs): 00:23:28.097 READ: bw=1774KiB/s (1816kB/s), 882KiB/s-892KiB/s (903kB/s-913kB/s), io=17.3MiB (18.2MB), run=10012-10012msec 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.097 
22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.097 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:28.098 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.098 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:28.098 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.098 00:23:28.098 real 0m11.055s 00:23:28.098 user 0m19.782s 00:23:28.098 sys 0m1.107s 00:23:28.098 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:28.098 22:18:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:28.098 ************************************ 00:23:28.098 END TEST fio_dif_1_multi_subsystems 00:23:28.098 ************************************ 00:23:28.098 22:18:14 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:28.098 22:18:14 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:28.098 22:18:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:28.098 22:18:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:28.098 22:18:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:28.098 ************************************ 00:23:28.098 START TEST fio_dif_rand_params 00:23:28.098 ************************************ 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 
0 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.098 bdev_null0 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:28.098 [2024-07-15 22:18:14.610620] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.098 { 00:23:28.098 "params": { 00:23:28.098 "name": "Nvme$subsystem", 00:23:28.098 "trtype": "$TEST_TRANSPORT", 00:23:28.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.098 "adrfam": "ipv4", 00:23:28.098 "trsvcid": "$NVMF_PORT", 00:23:28.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.098 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:28.098 "hdgst": ${hdgst:-false}, 00:23:28.098 "ddgst": ${ddgst:-false} 00:23:28.098 }, 00:23:28.098 "method": "bdev_nvme_attach_controller" 00:23:28.098 } 00:23:28.098 EOF 00:23:28.098 )") 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:28.098 "params": { 00:23:28.098 "name": "Nvme0", 00:23:28.098 "trtype": "tcp", 00:23:28.098 "traddr": "10.0.0.2", 00:23:28.098 "adrfam": "ipv4", 00:23:28.098 "trsvcid": "4420", 00:23:28.098 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:28.098 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:28.098 "hdgst": false, 00:23:28.098 "ddgst": false 00:23:28.098 }, 00:23:28.098 "method": "bdev_nvme_attach_controller" 00:23:28.098 }' 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:28.098 22:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:28.098 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:28.098 ... 
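The xtrace above also shows how the data path is wired up for the job that starts below: fio_bdev resolves to SPDK's fio plugin at build/fio/spdk_bdev, which is LD_PRELOADed into the stock fio binary under /usr/src/fio, while the generated JSON target config and the fio job file arrive over /dev/fd/62 and /dev/fd/61. A rough equivalent of that invocation, assuming dif.sh's create_json_sub_conf and gen_fio_conf helpers are sourced in the shell, looks like:

# Sketch of the plumbing behind 'fio /dev/fd/62' in dif.sh; process
# substitution supplies the /dev/fd paths seen in the log.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(create_json_sub_conf 0) <(gen_fio_conf)

The first descriptor carries the bdev_nvme_attach_controller config printed just above; the second carries the [filename0] job section whose banner follows.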
00:23:28.098 fio-3.35 00:23:28.098 Starting 3 threads 00:23:34.662 00:23:34.662 filename0: (groupid=0, jobs=1): err= 0: pid=98030: Mon Jul 15 22:18:20 2024 00:23:34.662 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(149MiB/5010msec) 00:23:34.662 slat (nsec): min=7474, max=34745, avg=11516.00, stdev=3789.16 00:23:34.662 clat (usec): min=6384, max=55399, avg=12554.10, stdev=3871.66 00:23:34.662 lat (usec): min=6395, max=55406, avg=12565.61, stdev=3871.94 00:23:34.662 clat percentiles (usec): 00:23:34.662 | 1.00th=[ 6915], 5.00th=[ 8094], 10.00th=[10945], 20.00th=[11469], 00:23:34.662 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:23:34.662 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13960], 00:23:34.662 | 99.00th=[19006], 99.50th=[53740], 99.90th=[54789], 99.95th=[55313], 00:23:34.662 | 99.99th=[55313] 00:23:34.662 bw ( KiB/s): min=27136, max=36937, per=34.28%, avg=30522.50, stdev=2643.70, samples=10 00:23:34.662 iops : min= 212, max= 288, avg=238.40, stdev=20.50, samples=10 00:23:34.662 lat (msec) : 10=6.69%, 20=92.47%, 50=0.17%, 100=0.67% 00:23:34.662 cpu : usr=92.45%, sys=6.19%, ctx=5, majf=0, minf=9 00:23:34.662 IO depths : 1=11.5%, 2=88.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.662 issued rwts: total=1195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.662 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:34.662 filename0: (groupid=0, jobs=1): err= 0: pid=98031: Mon Jul 15 22:18:20 2024 00:23:34.662 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(125MiB/5003msec) 00:23:34.662 slat (nsec): min=4776, max=42256, avg=11324.60, stdev=4409.48 00:23:34.662 clat (usec): min=3265, max=24735, avg=15026.58, stdev=2079.25 00:23:34.662 lat (usec): min=3276, max=24745, avg=15037.91, stdev=2078.92 00:23:34.662 clat percentiles (usec): 00:23:34.662 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[12518], 20.00th=[14484], 00:23:34.662 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15270], 60.00th=[15664], 00:23:34.662 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:23:34.662 | 99.00th=[19792], 99.50th=[22414], 99.90th=[24773], 99.95th=[24773], 00:23:34.662 | 99.99th=[24773] 00:23:34.662 bw ( KiB/s): min=23808, max=29184, per=28.61%, avg=25472.00, stdev=1574.63, samples=10 00:23:34.662 iops : min= 186, max= 228, avg=199.00, stdev=12.30, samples=10 00:23:34.662 lat (msec) : 4=0.10%, 10=6.02%, 20=93.08%, 50=0.80% 00:23:34.662 cpu : usr=92.54%, sys=6.00%, ctx=14, majf=0, minf=9 00:23:34.662 IO depths : 1=29.6%, 2=70.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.662 issued rwts: total=997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.662 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:34.662 filename0: (groupid=0, jobs=1): err= 0: pid=98032: Mon Jul 15 22:18:20 2024 00:23:34.662 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(162MiB/5006msec) 00:23:34.662 slat (nsec): min=5012, max=72971, avg=12716.60, stdev=3909.65 00:23:34.662 clat (usec): min=6613, max=54201, avg=11596.22, stdev=4577.41 00:23:34.662 lat (usec): min=6625, max=54210, avg=11608.94, stdev=4577.75 00:23:34.662 clat percentiles (usec): 00:23:34.662 | 1.00th=[ 7439], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10421], 
00:23:34.662 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:23:34.662 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12125], 95.00th=[12649], 00:23:34.662 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53216], 99.95th=[54264], 00:23:34.662 | 99.99th=[54264] 00:23:34.662 bw ( KiB/s): min=22272, max=35584, per=37.09%, avg=33024.00, stdev=3901.14, samples=10 00:23:34.662 iops : min= 174, max= 278, avg=258.00, stdev=30.48, samples=10 00:23:34.662 lat (msec) : 10=10.05%, 20=88.55%, 50=0.23%, 100=1.16% 00:23:34.662 cpu : usr=92.47%, sys=6.03%, ctx=14, majf=0, minf=9 00:23:34.662 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.662 issued rwts: total=1293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.662 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:34.662 00:23:34.662 Run status group 0 (all jobs): 00:23:34.662 READ: bw=87.0MiB/s (91.2MB/s), 24.9MiB/s-32.3MiB/s (26.1MB/s-33.9MB/s), io=436MiB (457MB), run=5003-5010msec 00:23:34.662 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:34.662 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:34.662 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:34.662 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:34.662 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:34.662 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 bdev_null0 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 [2024-07-15 22:18:20.529995] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 bdev_null1 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 bdev_null2 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:34.663 { 00:23:34.663 "params": { 00:23:34.663 "name": "Nvme$subsystem", 00:23:34.663 "trtype": "$TEST_TRANSPORT", 00:23:34.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.663 "adrfam": "ipv4", 00:23:34.663 "trsvcid": "$NVMF_PORT", 00:23:34.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.663 "hdgst": ${hdgst:-false}, 00:23:34.663 "ddgst": ${ddgst:-false} 00:23:34.663 }, 00:23:34.663 "method": "bdev_nvme_attach_controller" 00:23:34.663 } 00:23:34.663 EOF 00:23:34.663 )") 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:34.663 { 00:23:34.663 "params": { 00:23:34.663 "name": "Nvme$subsystem", 00:23:34.663 "trtype": "$TEST_TRANSPORT", 00:23:34.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.663 "adrfam": "ipv4", 00:23:34.663 "trsvcid": "$NVMF_PORT", 00:23:34.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.663 "hdgst": ${hdgst:-false}, 00:23:34.663 "ddgst": ${ddgst:-false} 00:23:34.663 }, 00:23:34.663 "method": "bdev_nvme_attach_controller" 00:23:34.663 } 00:23:34.663 EOF 00:23:34.663 )") 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:34.663 22:18:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:34.663 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:34.663 { 00:23:34.663 "params": { 00:23:34.664 "name": "Nvme$subsystem", 00:23:34.664 "trtype": "$TEST_TRANSPORT", 00:23:34.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.664 "adrfam": "ipv4", 00:23:34.664 "trsvcid": "$NVMF_PORT", 00:23:34.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.664 "hdgst": ${hdgst:-false}, 00:23:34.664 "ddgst": ${ddgst:-false} 00:23:34.664 }, 00:23:34.664 "method": "bdev_nvme_attach_controller" 00:23:34.664 } 00:23:34.664 EOF 00:23:34.664 )") 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:34.664 "params": { 00:23:34.664 "name": "Nvme0", 00:23:34.664 "trtype": "tcp", 00:23:34.664 "traddr": "10.0.0.2", 00:23:34.664 "adrfam": "ipv4", 00:23:34.664 "trsvcid": "4420", 00:23:34.664 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:34.664 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:34.664 "hdgst": false, 00:23:34.664 "ddgst": false 00:23:34.664 }, 00:23:34.664 "method": "bdev_nvme_attach_controller" 00:23:34.664 },{ 00:23:34.664 "params": { 00:23:34.664 "name": "Nvme1", 00:23:34.664 "trtype": "tcp", 00:23:34.664 "traddr": "10.0.0.2", 00:23:34.664 "adrfam": "ipv4", 00:23:34.664 "trsvcid": "4420", 00:23:34.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.664 "hdgst": false, 00:23:34.664 "ddgst": false 00:23:34.664 }, 00:23:34.664 "method": "bdev_nvme_attach_controller" 00:23:34.664 },{ 00:23:34.664 "params": { 00:23:34.664 "name": "Nvme2", 00:23:34.664 "trtype": "tcp", 00:23:34.664 "traddr": "10.0.0.2", 00:23:34.664 "adrfam": "ipv4", 00:23:34.664 "trsvcid": "4420", 00:23:34.664 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:34.664 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:34.664 "hdgst": false, 00:23:34.664 "ddgst": false 00:23:34.664 }, 00:23:34.664 "method": "bdev_nvme_attach_controller" 00:23:34.664 }' 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:34.664 
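Between generating the three-controller config above and launching fio below, fio_plugin() in autotest_common.sh probes the plugin with ldd for sanitizer runtimes (libasan, libclang_rt.asan); anything it finds is preloaded ahead of the plugin, which is why the empty matches in this run leave LD_PRELOAD with just the spdk_bdev path. A compact reconstruction of that probe, inferred from the xtrace rather than copied from the script:

# Sketch: preload the sanitizer runtime, if any, before the SPDK fio plugin.
fio_plugin_sketch() {
    local plugin=$1; shift
    local sanitizer lib asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $lib ]] && asan_lib+="$lib "
    done
    LD_PRELOAD="$asan_lib$plugin" /usr/src/fio/fio "$@"
}
# Called with the same arguments as logged below (the /dev/fd descriptors
# are set up by the caller, as in dif.sh).
fio_plugin_sketch /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61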
22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:34.664 22:18:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.664 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:34.664 ... 00:23:34.664 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:34.664 ... 00:23:34.664 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:34.664 ... 00:23:34.664 fio-3.35 00:23:34.664 Starting 24 threads 00:23:56.579 00:23:56.579 filename0: (groupid=0, jobs=1): err= 0: pid=98127: Mon Jul 15 22:18:41 2024 00:23:56.579 read: IOPS=262, BW=1049KiB/s (1075kB/s)(10.2MiB/10001msec) 00:23:56.579 slat (usec): min=3, max=4019, avg=14.52, stdev=110.71 00:23:56.579 clat (msec): min=2, max=122, avg=60.87, stdev=17.07 00:23:56.579 lat (msec): min=2, max=122, avg=60.88, stdev=17.07 00:23:56.579 clat percentiles (msec): 00:23:56.579 | 1.00th=[ 23], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 49], 00:23:56.579 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 62], 00:23:56.579 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 96], 00:23:56.579 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 123], 99.95th=[ 123], 00:23:56.579 | 99.99th=[ 123] 00:23:56.579 bw ( KiB/s): min= 896, max= 1154, per=2.66%, avg=1029.16, stdev=73.86, samples=19 00:23:56.579 iops : min= 224, max= 288, avg=257.16, stdev=18.33, samples=19 00:23:56.579 lat (msec) : 4=0.61%, 20=0.34%, 50=22.56%, 100=72.87%, 250=3.62% 00:23:56.579 cpu : usr=47.30%, sys=1.59%, ctx=1257, majf=0, minf=0 00:23:56.579 IO depths : 1=3.9%, 2=8.1%, 4=18.5%, 8=60.7%, 16=8.8%, 32=0.0%, >=64=0.0% 00:23:56.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 complete : 0=0.0%, 4=92.4%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 issued rwts: total=2624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.579 filename0: (groupid=0, jobs=1): err= 0: pid=98128: Mon Jul 15 22:18:41 2024 00:23:56.579 read: IOPS=362, BW=1451KiB/s (1486kB/s)(14.2MiB/10040msec) 00:23:56.579 slat (usec): min=4, max=4046, avg=14.73, stdev=115.58 00:23:56.579 clat (usec): min=1577, max=107021, avg=43946.65, stdev=19602.54 00:23:56.579 lat (usec): min=1585, max=107036, avg=43961.38, stdev=19606.73 00:23:56.579 clat percentiles (msec): 00:23:56.579 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 20], 20.00th=[ 24], 00:23:56.579 | 30.00th=[ 34], 40.00th=[ 40], 50.00th=[ 43], 60.00th=[ 48], 00:23:56.579 | 70.00th=[ 56], 80.00th=[ 61], 90.00th=[ 71], 95.00th=[ 79], 00:23:56.579 | 99.00th=[ 91], 99.50th=[ 96], 99.90th=[ 96], 99.95th=[ 96], 00:23:56.579 | 99.99th=[ 108] 00:23:56.579 bw ( KiB/s): min= 1024, max= 3556, per=3.67%, avg=1419.89, stdev=594.80, samples=19 00:23:56.579 iops : min= 256, max= 889, avg=354.89, stdev=148.73, samples=19 00:23:56.579 lat (msec) : 2=0.47%, 4=0.27%, 10=2.99%, 20=6.78%, 50=52.62% 00:23:56.579 lat (msec) : 100=36.84%, 250=0.03% 00:23:56.579 cpu : usr=44.56%, sys=1.53%, ctx=1290, majf=0, minf=9 00:23:56.579 IO depths : 1=1.6%, 2=3.6%, 4=12.0%, 8=71.4%, 16=11.3%, 32=0.0%, >=64=0.0% 00:23:56.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 
complete : 0=0.0%, 4=90.4%, 8=4.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 issued rwts: total=3643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.579 filename0: (groupid=0, jobs=1): err= 0: pid=98129: Mon Jul 15 22:18:41 2024 00:23:56.579 read: IOPS=688, BW=2754KiB/s (2820kB/s)(27.0MiB/10021msec) 00:23:56.579 slat (usec): min=4, max=8026, avg=13.67, stdev=136.47 00:23:56.579 clat (usec): min=1824, max=56489, avg=23112.12, stdev=8578.17 00:23:56.579 lat (usec): min=1832, max=56498, avg=23125.79, stdev=8581.34 00:23:56.579 clat percentiles (usec): 00:23:56.579 | 1.00th=[ 6980], 5.00th=[10028], 10.00th=[11994], 20.00th=[15270], 00:23:56.579 | 30.00th=[20055], 40.00th=[22676], 50.00th=[23987], 60.00th=[23987], 00:23:56.579 | 70.00th=[24249], 80.00th=[28705], 90.00th=[35914], 95.00th=[35914], 00:23:56.579 | 99.00th=[47973], 99.50th=[47973], 99.90th=[47973], 99.95th=[56361], 00:23:56.579 | 99.99th=[56361] 00:23:56.579 bw ( KiB/s): min= 2016, max= 5162, per=7.12%, avg=2754.25, stdev=692.52, samples=20 00:23:56.579 iops : min= 504, max= 1290, avg=688.50, stdev=173.02, samples=20 00:23:56.579 lat (msec) : 2=0.09%, 4=0.33%, 10=4.55%, 20=25.41%, 50=69.57% 00:23:56.579 lat (msec) : 100=0.06% 00:23:56.579 cpu : usr=35.09%, sys=1.40%, ctx=983, majf=0, minf=9 00:23:56.579 IO depths : 1=1.4%, 2=3.0%, 4=11.3%, 8=72.8%, 16=11.5%, 32=0.0%, >=64=0.0% 00:23:56.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 complete : 0=0.0%, 4=90.5%, 8=4.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 issued rwts: total=6900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.579 filename0: (groupid=0, jobs=1): err= 0: pid=98130: Mon Jul 15 22:18:41 2024 00:23:56.579 read: IOPS=305, BW=1222KiB/s (1251kB/s)(11.9MiB/10010msec) 00:23:56.579 slat (usec): min=6, max=4025, avg=17.41, stdev=145.19 00:23:56.579 clat (msec): min=15, max=102, avg=52.25, stdev=15.46 00:23:56.579 lat (msec): min=15, max=102, avg=52.27, stdev=15.46 00:23:56.579 clat percentiles (msec): 00:23:56.579 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 38], 00:23:56.579 | 30.00th=[ 42], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 56], 00:23:56.579 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 72], 95.00th=[ 81], 00:23:56.579 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 104], 99.95th=[ 104], 00:23:56.579 | 99.99th=[ 104] 00:23:56.579 bw ( KiB/s): min= 984, max= 2000, per=3.15%, avg=1220.40, stdev=254.32, samples=20 00:23:56.579 iops : min= 246, max= 500, avg=305.10, stdev=63.58, samples=20 00:23:56.579 lat (msec) : 20=0.20%, 50=47.92%, 100=51.68%, 250=0.20% 00:23:56.579 cpu : usr=51.90%, sys=1.78%, ctx=1212, majf=0, minf=0 00:23:56.579 IO depths : 1=1.7%, 2=3.5%, 4=11.1%, 8=72.6%, 16=11.2%, 32=0.0%, >=64=0.0% 00:23:56.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 complete : 0=0.0%, 4=90.2%, 8=4.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 issued rwts: total=3057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.579 filename0: (groupid=0, jobs=1): err= 0: pid=98131: Mon Jul 15 22:18:41 2024 00:23:56.579 read: IOPS=265, BW=1063KiB/s (1088kB/s)(10.4MiB/10010msec) 00:23:56.579 slat (usec): min=7, max=4047, avg=16.36, stdev=135.28 00:23:56.579 clat (msec): min=21, max=147, avg=60.08, stdev=17.27 00:23:56.579 lat (msec): min=21, max=147, avg=60.09, stdev=17.27 
00:23:56.579 clat percentiles (msec): 00:23:56.579 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 48], 00:23:56.579 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 62], 00:23:56.579 | 70.00th=[ 66], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 95], 00:23:56.579 | 99.00th=[ 118], 99.50th=[ 133], 99.90th=[ 148], 99.95th=[ 148], 00:23:56.579 | 99.99th=[ 148] 00:23:56.579 bw ( KiB/s): min= 896, max= 1384, per=2.73%, avg=1057.20, stdev=145.24, samples=20 00:23:56.579 iops : min= 224, max= 346, avg=264.30, stdev=36.31, samples=20 00:23:56.579 lat (msec) : 50=29.00%, 100=68.45%, 250=2.56% 00:23:56.579 cpu : usr=47.71%, sys=1.49%, ctx=1460, majf=0, minf=0 00:23:56.579 IO depths : 1=3.5%, 2=7.3%, 4=16.8%, 8=63.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:23:56.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 complete : 0=0.0%, 4=92.1%, 8=2.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 issued rwts: total=2659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.579 filename0: (groupid=0, jobs=1): err= 0: pid=98132: Mon Jul 15 22:18:41 2024 00:23:56.579 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.2MiB/10007msec) 00:23:56.579 slat (usec): min=4, max=5946, avg=15.83, stdev=140.55 00:23:56.579 clat (msec): min=6, max=158, avg=61.22, stdev=18.03 00:23:56.579 lat (msec): min=6, max=158, avg=61.24, stdev=18.03 00:23:56.579 clat percentiles (msec): 00:23:56.579 | 1.00th=[ 30], 5.00th=[ 38], 10.00th=[ 42], 20.00th=[ 50], 00:23:56.579 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 63], 00:23:56.579 | 70.00th=[ 67], 80.00th=[ 73], 90.00th=[ 81], 95.00th=[ 88], 00:23:56.579 | 99.00th=[ 117], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 159], 00:23:56.579 | 99.99th=[ 159] 00:23:56.579 bw ( KiB/s): min= 826, max= 1376, per=2.68%, avg=1035.00, stdev=133.20, samples=19 00:23:56.579 iops : min= 206, max= 344, avg=258.63, stdev=33.31, samples=19 00:23:56.579 lat (msec) : 10=0.08%, 20=0.54%, 50=23.75%, 100=73.18%, 250=2.45% 00:23:56.579 cpu : usr=47.37%, sys=1.51%, ctx=1522, majf=0, minf=0 00:23:56.579 IO depths : 1=2.8%, 2=6.1%, 4=15.9%, 8=65.2%, 16=10.0%, 32=0.0%, >=64=0.0% 00:23:56.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 complete : 0=0.0%, 4=91.5%, 8=3.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 issued rwts: total=2610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.579 filename0: (groupid=0, jobs=1): err= 0: pid=98133: Mon Jul 15 22:18:41 2024 00:23:56.579 read: IOPS=767, BW=3069KiB/s (3143kB/s)(30.0MiB/10014msec) 00:23:56.579 slat (usec): min=5, max=8033, avg=16.87, stdev=195.47 00:23:56.579 clat (usec): min=1982, max=63915, avg=20728.46, stdev=8251.80 00:23:56.579 lat (usec): min=1990, max=63923, avg=20745.33, stdev=8253.07 00:23:56.579 clat percentiles (usec): 00:23:56.579 | 1.00th=[ 4555], 5.00th=[ 8848], 10.00th=[10552], 20.00th=[14222], 00:23:56.579 | 30.00th=[15795], 40.00th=[16909], 50.00th=[21365], 60.00th=[23462], 00:23:56.579 | 70.00th=[23987], 80.00th=[25035], 90.00th=[32113], 95.00th=[35914], 00:23:56.579 | 99.00th=[46400], 99.50th=[47973], 99.90th=[61604], 99.95th=[63701], 00:23:56.579 | 99.99th=[63701] 00:23:56.579 bw ( KiB/s): min= 2256, max= 4861, per=7.93%, avg=3066.00, stdev=671.05, samples=20 00:23:56.579 iops : min= 564, max= 1215, avg=766.45, stdev=167.74, samples=20 00:23:56.579 lat (msec) : 2=0.04%, 4=0.61%, 10=7.90%, 20=38.99%, 50=52.17% 
00:23:56.579 lat (msec) : 100=0.29% 00:23:56.579 cpu : usr=42.19%, sys=1.81%, ctx=1121, majf=0, minf=9 00:23:56.579 IO depths : 1=1.2%, 2=2.5%, 4=9.5%, 8=75.0%, 16=11.8%, 32=0.0%, >=64=0.0% 00:23:56.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 complete : 0=0.0%, 4=89.7%, 8=5.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 issued rwts: total=7684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.579 filename0: (groupid=0, jobs=1): err= 0: pid=98134: Mon Jul 15 22:18:41 2024 00:23:56.579 read: IOPS=681, BW=2725KiB/s (2791kB/s)(26.6MiB/10010msec) 00:23:56.579 slat (usec): min=4, max=9048, avg=23.65, stdev=252.07 00:23:56.579 clat (usec): min=1167, max=49517, avg=23323.55, stdev=8391.09 00:23:56.579 lat (usec): min=1183, max=49531, avg=23347.20, stdev=8399.40 00:23:56.579 clat percentiles (usec): 00:23:56.579 | 1.00th=[ 6390], 5.00th=[10290], 10.00th=[12387], 20.00th=[15008], 00:23:56.579 | 30.00th=[20579], 40.00th=[22938], 50.00th=[23725], 60.00th=[23987], 00:23:56.579 | 70.00th=[24249], 80.00th=[29230], 90.00th=[35914], 95.00th=[36439], 00:23:56.579 | 99.00th=[47973], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 00:23:56.579 | 99.99th=[49546] 00:23:56.579 bw ( KiB/s): min= 2232, max= 4637, per=7.04%, avg=2724.05, stdev=582.98, samples=20 00:23:56.579 iops : min= 558, max= 1159, avg=680.95, stdev=145.69, samples=20 00:23:56.579 lat (msec) : 2=0.04%, 4=0.82%, 10=3.90%, 20=23.28%, 50=71.95% 00:23:56.579 cpu : usr=34.59%, sys=1.45%, ctx=1013, majf=0, minf=9 00:23:56.579 IO depths : 1=1.2%, 2=2.5%, 4=9.4%, 8=75.2%, 16=11.8%, 32=0.0%, >=64=0.0% 00:23:56.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 complete : 0=0.0%, 4=89.9%, 8=5.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.579 issued rwts: total=6820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.579 filename1: (groupid=0, jobs=1): err= 0: pid=98135: Mon Jul 15 22:18:41 2024 00:23:56.579 read: IOPS=310, BW=1242KiB/s (1272kB/s)(12.2MiB/10035msec) 00:23:56.579 slat (usec): min=4, max=4034, avg=17.11, stdev=153.63 00:23:56.580 clat (msec): min=17, max=125, avg=51.36, stdev=15.80 00:23:56.580 lat (msec): min=17, max=125, avg=51.37, stdev=15.80 00:23:56.580 clat percentiles (msec): 00:23:56.580 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 39], 00:23:56.580 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 50], 60.00th=[ 55], 00:23:56.580 | 70.00th=[ 58], 80.00th=[ 64], 90.00th=[ 72], 95.00th=[ 84], 00:23:56.580 | 99.00th=[ 100], 99.50th=[ 100], 99.90th=[ 114], 99.95th=[ 126], 00:23:56.580 | 99.99th=[ 126] 00:23:56.580 bw ( KiB/s): min= 1024, max= 1680, per=3.21%, avg=1240.25, stdev=173.66, samples=20 00:23:56.580 iops : min= 256, max= 420, avg=310.05, stdev=43.41, samples=20 00:23:56.580 lat (msec) : 20=0.19%, 50=52.10%, 100=47.26%, 250=0.45% 00:23:56.580 cpu : usr=47.17%, sys=1.38%, ctx=1403, majf=0, minf=0 00:23:56.580 IO depths : 1=1.3%, 2=3.2%, 4=10.9%, 8=72.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:23:56.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 complete : 0=0.0%, 4=90.3%, 8=4.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 issued rwts: total=3117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.580 filename1: (groupid=0, jobs=1): err= 0: pid=98136: Mon Jul 15 22:18:41 2024 00:23:56.580 read: 
IOPS=785, BW=3142KiB/s (3217kB/s)(30.7MiB/10013msec) 00:23:56.580 slat (usec): min=4, max=9026, avg=18.82, stdev=221.96 00:23:56.580 clat (usec): min=1889, max=59484, avg=20251.89, stdev=8687.75 00:23:56.580 lat (usec): min=1897, max=59493, avg=20270.71, stdev=8690.50 00:23:56.580 clat percentiles (usec): 00:23:56.580 | 1.00th=[ 4686], 5.00th=[ 8029], 10.00th=[ 9634], 20.00th=[13304], 00:23:56.580 | 30.00th=[15401], 40.00th=[16057], 50.00th=[18744], 60.00th=[22152], 00:23:56.580 | 70.00th=[23987], 80.00th=[25822], 90.00th=[32900], 95.00th=[35914], 00:23:56.580 | 99.00th=[45876], 99.50th=[47973], 99.90th=[58983], 99.95th=[59507], 00:23:56.580 | 99.99th=[59507] 00:23:56.580 bw ( KiB/s): min= 2160, max= 4811, per=8.12%, avg=3139.30, stdev=730.45, samples=20 00:23:56.580 iops : min= 540, max= 1202, avg=784.75, stdev=182.45, samples=20 00:23:56.580 lat (msec) : 2=0.11%, 4=0.74%, 10=9.69%, 20=41.95%, 50=47.41% 00:23:56.580 lat (msec) : 100=0.10% 00:23:56.580 cpu : usr=41.44%, sys=1.86%, ctx=1267, majf=0, minf=9 00:23:56.580 IO depths : 1=1.0%, 2=2.1%, 4=9.2%, 8=75.8%, 16=11.9%, 32=0.0%, >=64=0.0% 00:23:56.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 complete : 0=0.0%, 4=89.8%, 8=5.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 issued rwts: total=7864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.580 filename1: (groupid=0, jobs=1): err= 0: pid=98137: Mon Jul 15 22:18:41 2024 00:23:56.580 read: IOPS=399, BW=1600KiB/s (1638kB/s)(15.7MiB/10036msec) 00:23:56.580 slat (usec): min=4, max=8046, avg=20.79, stdev=210.69 00:23:56.580 clat (usec): min=1898, max=95849, avg=39844.73, stdev=20016.97 00:23:56.580 lat (usec): min=1906, max=95861, avg=39865.53, stdev=20015.95 00:23:56.580 clat percentiles (usec): 00:23:56.580 | 1.00th=[ 6587], 5.00th=[ 8094], 10.00th=[12780], 20.00th=[16188], 00:23:56.580 | 30.00th=[31065], 40.00th=[36963], 50.00th=[40109], 60.00th=[43779], 00:23:56.580 | 70.00th=[47973], 80.00th=[56361], 90.00th=[65274], 95.00th=[76022], 00:23:56.580 | 99.00th=[87557], 99.50th=[91751], 99.90th=[95945], 99.95th=[95945], 00:23:56.580 | 99.99th=[95945] 00:23:56.580 bw ( KiB/s): min= 1024, max= 3919, per=4.14%, avg=1600.95, stdev=818.08, samples=19 00:23:56.580 iops : min= 256, max= 979, avg=400.11, stdev=204.43, samples=19 00:23:56.580 lat (msec) : 2=0.15%, 4=0.30%, 10=7.55%, 20=13.33%, 50=51.69% 00:23:56.580 lat (msec) : 100=26.98% 00:23:56.580 cpu : usr=46.37%, sys=1.51%, ctx=1456, majf=0, minf=9 00:23:56.580 IO depths : 1=1.0%, 2=2.1%, 4=8.5%, 8=76.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:23:56.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 complete : 0=0.0%, 4=89.7%, 8=5.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 issued rwts: total=4014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.580 filename1: (groupid=0, jobs=1): err= 0: pid=98138: Mon Jul 15 22:18:41 2024 00:23:56.580 read: IOPS=254, BW=1020KiB/s (1044kB/s)(9.96MiB/10007msec) 00:23:56.580 slat (usec): min=3, max=4041, avg=18.47, stdev=143.66 00:23:56.580 clat (msec): min=14, max=155, avg=62.59, stdev=16.10 00:23:56.580 lat (msec): min=14, max=155, avg=62.61, stdev=16.10 00:23:56.580 clat percentiles (msec): 00:23:56.580 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 54], 00:23:56.580 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 63], 00:23:56.580 | 70.00th=[ 66], 80.00th=[ 72], 
90.00th=[ 81], 95.00th=[ 86], 00:23:56.580 | 99.00th=[ 121], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:23:56.580 | 99.99th=[ 157] 00:23:56.580 bw ( KiB/s): min= 768, max= 1154, per=2.59%, avg=1002.84, stdev=93.43, samples=19 00:23:56.580 iops : min= 192, max= 288, avg=250.58, stdev=23.32, samples=19 00:23:56.580 lat (msec) : 20=0.63%, 50=14.19%, 100=82.05%, 250=3.14% 00:23:56.580 cpu : usr=46.78%, sys=1.62%, ctx=1274, majf=0, minf=0 00:23:56.580 IO depths : 1=4.3%, 2=9.1%, 4=20.5%, 8=57.8%, 16=8.2%, 32=0.0%, >=64=0.0% 00:23:56.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 complete : 0=0.0%, 4=92.9%, 8=1.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 issued rwts: total=2551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.580 filename1: (groupid=0, jobs=1): err= 0: pid=98139: Mon Jul 15 22:18:41 2024 00:23:56.580 read: IOPS=259, BW=1038KiB/s (1062kB/s)(10.1MiB/10012msec) 00:23:56.580 slat (nsec): min=6528, max=80850, avg=12503.01, stdev=5799.59 00:23:56.580 clat (msec): min=14, max=157, avg=61.60, stdev=17.20 00:23:56.580 lat (msec): min=15, max=157, avg=61.61, stdev=17.20 00:23:56.580 clat percentiles (msec): 00:23:56.580 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 41], 20.00th=[ 48], 00:23:56.580 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 65], 00:23:56.580 | 70.00th=[ 69], 80.00th=[ 75], 90.00th=[ 82], 95.00th=[ 88], 00:23:56.580 | 99.00th=[ 109], 99.50th=[ 127], 99.90th=[ 159], 99.95th=[ 159], 00:23:56.580 | 99.99th=[ 159] 00:23:56.580 bw ( KiB/s): min= 816, max= 1504, per=2.67%, avg=1034.80, stdev=194.67, samples=20 00:23:56.580 iops : min= 204, max= 376, avg=258.70, stdev=48.67, samples=20 00:23:56.580 lat (msec) : 20=0.15%, 50=22.80%, 100=75.16%, 250=1.89% 00:23:56.580 cpu : usr=46.80%, sys=1.33%, ctx=1467, majf=0, minf=0 00:23:56.580 IO depths : 1=0.1%, 2=0.3%, 4=4.2%, 8=79.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:56.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 complete : 0=0.0%, 4=89.3%, 8=8.1%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 issued rwts: total=2597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.580 filename1: (groupid=0, jobs=1): err= 0: pid=98140: Mon Jul 15 22:18:41 2024 00:23:56.580 read: IOPS=312, BW=1251KiB/s (1281kB/s)(12.2MiB/10018msec) 00:23:56.580 slat (usec): min=4, max=4036, avg=15.51, stdev=135.49 00:23:56.580 clat (msec): min=24, max=143, avg=51.05, stdev=17.39 00:23:56.580 lat (msec): min=24, max=144, avg=51.07, stdev=17.39 00:23:56.580 clat percentiles (msec): 00:23:56.580 | 1.00th=[ 25], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 38], 00:23:56.580 | 30.00th=[ 41], 40.00th=[ 43], 50.00th=[ 48], 60.00th=[ 54], 00:23:56.580 | 70.00th=[ 58], 80.00th=[ 63], 90.00th=[ 72], 95.00th=[ 84], 00:23:56.580 | 99.00th=[ 105], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:23:56.580 | 99.99th=[ 144] 00:23:56.580 bw ( KiB/s): min= 1024, max= 1488, per=3.21%, avg=1242.63, stdev=143.38, samples=19 00:23:56.580 iops : min= 256, max= 372, avg=310.63, stdev=35.85, samples=19 00:23:56.580 lat (msec) : 50=56.35%, 100=42.15%, 250=1.50% 00:23:56.580 cpu : usr=47.69%, sys=1.80%, ctx=1382, majf=0, minf=0 00:23:56.580 IO depths : 1=1.4%, 2=2.9%, 4=10.2%, 8=73.9%, 16=11.7%, 32=0.0%, >=64=0.0% 00:23:56.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 complete : 0=0.0%, 4=90.2%, 8=4.7%, 
16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 issued rwts: total=3132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.580 filename1: (groupid=0, jobs=1): err= 0: pid=98141: Mon Jul 15 22:18:41 2024 00:23:56.580 read: IOPS=265, BW=1060KiB/s (1086kB/s)(10.4MiB/10020msec) 00:23:56.580 slat (usec): min=5, max=8043, avg=20.25, stdev=209.91 00:23:56.580 clat (msec): min=24, max=122, avg=60.14, stdev=15.26 00:23:56.580 lat (msec): min=24, max=122, avg=60.16, stdev=15.25 00:23:56.580 clat percentiles (msec): 00:23:56.580 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 48], 00:23:56.580 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 63], 00:23:56.580 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 88], 00:23:56.580 | 99.00th=[ 107], 99.50th=[ 123], 99.90th=[ 123], 99.95th=[ 123], 00:23:56.580 | 99.99th=[ 123] 00:23:56.580 bw ( KiB/s): min= 784, max= 1259, per=2.74%, avg=1059.35, stdev=125.87, samples=20 00:23:56.580 iops : min= 196, max= 314, avg=264.80, stdev=31.40, samples=20 00:23:56.580 lat (msec) : 50=24.36%, 100=73.64%, 250=2.00% 00:23:56.580 cpu : usr=48.26%, sys=1.40%, ctx=1263, majf=0, minf=0 00:23:56.580 IO depths : 1=2.5%, 2=5.5%, 4=15.4%, 8=66.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:23:56.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 complete : 0=0.0%, 4=91.3%, 8=3.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 issued rwts: total=2656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.580 filename1: (groupid=0, jobs=1): err= 0: pid=98142: Mon Jul 15 22:18:41 2024 00:23:56.580 read: IOPS=277, BW=1109KiB/s (1135kB/s)(10.8MiB/10004msec) 00:23:56.580 slat (usec): min=7, max=5018, avg=14.94, stdev=121.88 00:23:56.580 clat (msec): min=16, max=135, avg=57.60, stdev=15.66 00:23:56.580 lat (msec): min=16, max=135, avg=57.61, stdev=15.66 00:23:56.580 clat percentiles (msec): 00:23:56.580 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 46], 00:23:56.580 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 60], 00:23:56.580 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 81], 95.00th=[ 88], 00:23:56.580 | 99.00th=[ 96], 99.50th=[ 106], 99.90th=[ 136], 99.95th=[ 136], 00:23:56.580 | 99.99th=[ 136] 00:23:56.580 bw ( KiB/s): min= 880, max= 1504, per=2.83%, avg=1093.47, stdev=149.05, samples=19 00:23:56.580 iops : min= 220, max= 376, avg=273.37, stdev=37.26, samples=19 00:23:56.580 lat (msec) : 20=0.58%, 50=31.63%, 100=66.97%, 250=0.83% 00:23:56.580 cpu : usr=47.26%, sys=1.78%, ctx=1264, majf=0, minf=0 00:23:56.580 IO depths : 1=3.1%, 2=6.7%, 4=16.5%, 8=64.1%, 16=9.7%, 32=0.0%, >=64=0.0% 00:23:56.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.580 issued rwts: total=2773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.580 filename2: (groupid=0, jobs=1): err= 0: pid=98143: Mon Jul 15 22:18:41 2024 00:23:56.580 read: IOPS=302, BW=1210KiB/s (1239kB/s)(11.8MiB/10010msec) 00:23:56.580 slat (usec): min=6, max=4025, avg=16.29, stdev=145.74 00:23:56.580 clat (msec): min=12, max=122, avg=52.77, stdev=16.37 00:23:56.580 lat (msec): min=12, max=122, avg=52.79, stdev=16.36 00:23:56.580 clat percentiles (msec): 00:23:56.580 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 39], 00:23:56.580 | 30.00th=[ 42], 
40.00th=[ 47], 50.00th=[ 51], 60.00th=[ 55], 00:23:56.581 | 70.00th=[ 59], 80.00th=[ 65], 90.00th=[ 74], 95.00th=[ 81], 00:23:56.581 | 99.00th=[ 96], 99.50th=[ 122], 99.90th=[ 122], 99.95th=[ 123], 00:23:56.581 | 99.99th=[ 123] 00:23:56.581 bw ( KiB/s): min= 768, max= 1456, per=3.12%, avg=1205.20, stdev=187.46, samples=20 00:23:56.581 iops : min= 192, max= 364, avg=301.30, stdev=46.86, samples=20 00:23:56.581 lat (msec) : 20=0.20%, 50=49.85%, 100=49.19%, 250=0.76% 00:23:56.581 cpu : usr=48.61%, sys=1.45%, ctx=1253, majf=0, minf=0 00:23:56.581 IO depths : 1=1.5%, 2=3.1%, 4=10.1%, 8=73.3%, 16=12.1%, 32=0.0%, >=64=0.0% 00:23:56.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 issued rwts: total=3029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.581 filename2: (groupid=0, jobs=1): err= 0: pid=98144: Mon Jul 15 22:18:41 2024 00:23:56.581 read: IOPS=309, BW=1239KiB/s (1269kB/s)(12.1MiB/10032msec) 00:23:56.581 slat (usec): min=6, max=4143, avg=17.93, stdev=162.04 00:23:56.581 clat (msec): min=21, max=143, avg=51.48, stdev=16.06 00:23:56.581 lat (msec): min=21, max=143, avg=51.49, stdev=16.06 00:23:56.581 clat percentiles (msec): 00:23:56.581 | 1.00th=[ 25], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 39], 00:23:56.581 | 30.00th=[ 41], 40.00th=[ 45], 50.00th=[ 49], 60.00th=[ 55], 00:23:56.581 | 70.00th=[ 57], 80.00th=[ 64], 90.00th=[ 72], 95.00th=[ 81], 00:23:56.581 | 99.00th=[ 105], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 144], 00:23:56.581 | 99.99th=[ 144] 00:23:56.581 bw ( KiB/s): min= 824, max= 1536, per=3.20%, avg=1236.40, stdev=188.97, samples=20 00:23:56.581 iops : min= 206, max= 384, avg=309.10, stdev=47.24, samples=20 00:23:56.581 lat (msec) : 50=52.72%, 100=45.96%, 250=1.32% 00:23:56.581 cpu : usr=49.61%, sys=1.52%, ctx=1330, majf=0, minf=0 00:23:56.581 IO depths : 1=2.3%, 2=4.7%, 4=12.8%, 8=69.3%, 16=11.0%, 32=0.0%, >=64=0.0% 00:23:56.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 issued rwts: total=3107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.581 filename2: (groupid=0, jobs=1): err= 0: pid=98145: Mon Jul 15 22:18:41 2024 00:23:56.581 read: IOPS=696, BW=2785KiB/s (2852kB/s)(27.2MiB/10015msec) 00:23:56.581 slat (usec): min=5, max=8168, avg=43.41, stdev=348.81 00:23:56.581 clat (usec): min=2036, max=51059, avg=22715.35, stdev=8098.68 00:23:56.581 lat (usec): min=2057, max=51078, avg=22758.76, stdev=8103.89 00:23:56.581 clat percentiles (usec): 00:23:56.581 | 1.00th=[ 7373], 5.00th=[11076], 10.00th=[13698], 20.00th=[15795], 00:23:56.581 | 30.00th=[17695], 40.00th=[20841], 50.00th=[22938], 60.00th=[23725], 00:23:56.581 | 70.00th=[24249], 80.00th=[28705], 90.00th=[34341], 95.00th=[38011], 00:23:56.581 | 99.00th=[46924], 99.50th=[47973], 99.90th=[51119], 99.95th=[51119], 00:23:56.581 | 99.99th=[51119] 00:23:56.581 bw ( KiB/s): min= 2176, max= 3800, per=7.20%, avg=2784.95, stdev=462.63, samples=20 00:23:56.581 iops : min= 544, max= 950, avg=696.20, stdev=115.64, samples=20 00:23:56.581 lat (msec) : 4=0.27%, 10=3.57%, 20=32.91%, 50=63.01%, 100=0.24% 00:23:56.581 cpu : usr=41.84%, sys=2.08%, ctx=1293, majf=0, minf=9 00:23:56.581 IO depths : 1=1.8%, 2=4.0%, 4=12.3%, 8=70.6%, 16=11.3%, 32=0.0%, >=64=0.0% 
00:23:56.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 issued rwts: total=6974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.581 filename2: (groupid=0, jobs=1): err= 0: pid=98146: Mon Jul 15 22:18:41 2024 00:23:56.581 read: IOPS=276, BW=1107KiB/s (1134kB/s)(10.8MiB/10012msec) 00:23:56.581 slat (usec): min=3, max=4035, avg=14.65, stdev=108.15 00:23:56.581 clat (msec): min=16, max=124, avg=57.68, stdev=16.71 00:23:56.581 lat (msec): min=16, max=124, avg=57.70, stdev=16.71 00:23:56.581 clat percentiles (msec): 00:23:56.581 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 42], 00:23:56.581 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 61], 00:23:56.581 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 81], 95.00th=[ 88], 00:23:56.581 | 99.00th=[ 112], 99.50th=[ 120], 99.90th=[ 125], 99.95th=[ 125], 00:23:56.581 | 99.99th=[ 125] 00:23:56.581 bw ( KiB/s): min= 896, max= 1544, per=2.85%, avg=1102.40, stdev=159.36, samples=20 00:23:56.581 iops : min= 224, max= 386, avg=275.60, stdev=39.84, samples=20 00:23:56.581 lat (msec) : 20=0.58%, 50=31.24%, 100=66.05%, 250=2.13% 00:23:56.581 cpu : usr=46.68%, sys=1.61%, ctx=1371, majf=0, minf=0 00:23:56.581 IO depths : 1=2.6%, 2=5.6%, 4=14.5%, 8=67.0%, 16=10.4%, 32=0.0%, >=64=0.0% 00:23:56.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 complete : 0=0.0%, 4=91.2%, 8=3.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 issued rwts: total=2772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.581 filename2: (groupid=0, jobs=1): err= 0: pid=98147: Mon Jul 15 22:18:41 2024 00:23:56.581 read: IOPS=734, BW=2938KiB/s (3008kB/s)(28.8MiB/10028msec) 00:23:56.581 slat (usec): min=3, max=6568, avg=14.67, stdev=128.54 00:23:56.581 clat (usec): min=982, max=61803, avg=21683.77, stdev=9665.30 00:23:56.581 lat (usec): min=990, max=61818, avg=21698.44, stdev=9667.00 00:23:56.581 clat percentiles (usec): 00:23:56.581 | 1.00th=[ 1663], 5.00th=[ 3916], 10.00th=[ 9765], 20.00th=[14091], 00:23:56.581 | 30.00th=[15926], 40.00th=[19792], 50.00th=[22152], 60.00th=[23987], 00:23:56.581 | 70.00th=[24773], 80.00th=[28967], 90.00th=[34341], 95.00th=[37487], 00:23:56.581 | 99.00th=[47973], 99.50th=[47973], 99.90th=[57934], 99.95th=[58983], 00:23:56.581 | 99.99th=[61604] 00:23:56.581 bw ( KiB/s): min= 2272, max= 5648, per=7.60%, avg=2940.60, stdev=791.32, samples=20 00:23:56.581 iops : min= 568, max= 1412, avg=735.15, stdev=197.83, samples=20 00:23:56.581 lat (usec) : 1000=0.04% 00:23:56.581 lat (msec) : 2=1.72%, 4=3.41%, 10=5.21%, 20=30.84%, 50=58.52% 00:23:56.581 lat (msec) : 100=0.26% 00:23:56.581 cpu : usr=41.38%, sys=1.88%, ctx=1357, majf=0, minf=9 00:23:56.581 IO depths : 1=0.6%, 2=1.2%, 4=7.5%, 8=78.0%, 16=12.7%, 32=0.0%, >=64=0.0% 00:23:56.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 complete : 0=0.0%, 4=89.6%, 8=5.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 issued rwts: total=7365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.581 filename2: (groupid=0, jobs=1): err= 0: pid=98148: Mon Jul 15 22:18:41 2024 00:23:56.581 read: IOPS=326, BW=1308KiB/s (1339kB/s)(12.8MiB/10004msec) 00:23:56.581 slat (usec): min=4, max=4045, avg=16.35, stdev=140.65 
00:23:56.581 clat (msec): min=23, max=126, avg=48.82, stdev=13.57 00:23:56.581 lat (msec): min=23, max=126, avg=48.84, stdev=13.57 00:23:56.581 clat percentiles (msec): 00:23:56.581 | 1.00th=[ 25], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 38], 00:23:56.581 | 30.00th=[ 41], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 50], 00:23:56.581 | 70.00th=[ 56], 80.00th=[ 60], 90.00th=[ 65], 95.00th=[ 73], 00:23:56.581 | 99.00th=[ 85], 99.50th=[ 95], 99.90th=[ 127], 99.95th=[ 127], 00:23:56.581 | 99.99th=[ 127] 00:23:56.581 bw ( KiB/s): min= 952, max= 1744, per=3.36%, avg=1300.63, stdev=203.16, samples=19 00:23:56.581 iops : min= 238, max= 436, avg=325.16, stdev=50.79, samples=19 00:23:56.581 lat (msec) : 50=61.14%, 100=38.46%, 250=0.40% 00:23:56.581 cpu : usr=48.07%, sys=1.55%, ctx=1381, majf=0, minf=0 00:23:56.581 IO depths : 1=1.0%, 2=2.2%, 4=8.5%, 8=75.9%, 16=12.3%, 32=0.0%, >=64=0.0% 00:23:56.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 complete : 0=0.0%, 4=89.8%, 8=5.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 issued rwts: total=3271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.581 filename2: (groupid=0, jobs=1): err= 0: pid=98149: Mon Jul 15 22:18:41 2024 00:23:56.581 read: IOPS=325, BW=1301KiB/s (1332kB/s)(12.7MiB/10032msec) 00:23:56.581 slat (usec): min=4, max=8029, avg=21.83, stdev=222.05 00:23:56.581 clat (usec): min=16010, max=98599, avg=48996.48, stdev=14621.97 00:23:56.581 lat (usec): min=16019, max=98626, avg=49018.32, stdev=14622.34 00:23:56.581 clat percentiles (usec): 00:23:56.581 | 1.00th=[23987], 5.00th=[31589], 10.00th=[32637], 20.00th=[37487], 00:23:56.581 | 30.00th=[39584], 40.00th=[41681], 50.00th=[46924], 60.00th=[50594], 00:23:56.581 | 70.00th=[55313], 80.00th=[60556], 90.00th=[69731], 95.00th=[74974], 00:23:56.581 | 99.00th=[95945], 99.50th=[98042], 99.90th=[98042], 99.95th=[99091], 00:23:56.581 | 99.99th=[99091] 00:23:56.581 bw ( KiB/s): min= 896, max= 1584, per=3.36%, avg=1298.80, stdev=217.49, samples=20 00:23:56.581 iops : min= 224, max= 396, avg=324.70, stdev=54.37, samples=20 00:23:56.581 lat (msec) : 20=0.18%, 50=59.33%, 100=40.48% 00:23:56.581 cpu : usr=46.99%, sys=1.49%, ctx=1282, majf=0, minf=0 00:23:56.581 IO depths : 1=1.0%, 2=2.1%, 4=9.0%, 8=75.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:23:56.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 issued rwts: total=3263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.581 filename2: (groupid=0, jobs=1): err= 0: pid=98150: Mon Jul 15 22:18:41 2024 00:23:56.581 read: IOPS=258, BW=1036KiB/s (1061kB/s)(10.1MiB/10012msec) 00:23:56.581 slat (usec): min=6, max=4025, avg=16.98, stdev=136.49 00:23:56.581 clat (msec): min=14, max=145, avg=61.62, stdev=16.55 00:23:56.581 lat (msec): min=14, max=145, avg=61.64, stdev=16.54 00:23:56.581 clat percentiles (msec): 00:23:56.581 | 1.00th=[ 30], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 52], 00:23:56.581 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 63], 00:23:56.581 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 91], 00:23:56.581 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 146], 99.95th=[ 146], 00:23:56.581 | 99.99th=[ 146] 00:23:56.581 bw ( KiB/s): min= 864, max= 1280, per=2.67%, avg=1032.40, stdev=121.56, samples=20 00:23:56.581 iops : min= 216, max= 320, 
avg=258.10, stdev=30.39, samples=20 00:23:56.581 lat (msec) : 20=0.23%, 50=18.43%, 100=78.67%, 250=2.66% 00:23:56.581 cpu : usr=47.19%, sys=1.56%, ctx=1348, majf=0, minf=0 00:23:56.581 IO depths : 1=2.4%, 2=5.3%, 4=14.6%, 8=66.4%, 16=11.3%, 32=0.0%, >=64=0.0% 00:23:56.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 complete : 0=0.0%, 4=91.7%, 8=3.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.581 issued rwts: total=2593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.581 00:23:56.581 Run status group 0 (all jobs): 00:23:56.581 READ: bw=37.8MiB/s (39.6MB/s), 1020KiB/s-3142KiB/s (1044kB/s-3217kB/s), io=379MiB (398MB), run=10001-10040msec 00:23:56.581 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 bdev_null0 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 [2024-07-15 
22:18:42.150021] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 bdev_null1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:56.582 
22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.582 { 00:23:56.582 "params": { 00:23:56.582 "name": "Nvme$subsystem", 00:23:56.582 "trtype": "$TEST_TRANSPORT", 00:23:56.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.582 "adrfam": "ipv4", 00:23:56.582 "trsvcid": "$NVMF_PORT", 00:23:56.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.582 "hdgst": ${hdgst:-false}, 00:23:56.582 "ddgst": ${ddgst:-false} 00:23:56.582 }, 00:23:56.582 "method": "bdev_nvme_attach_controller" 00:23:56.582 } 00:23:56.582 EOF 00:23:56.582 )") 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:56.582 22:18:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.582 { 00:23:56.582 "params": { 00:23:56.582 "name": "Nvme$subsystem", 00:23:56.582 "trtype": "$TEST_TRANSPORT", 00:23:56.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.582 "adrfam": "ipv4", 00:23:56.582 "trsvcid": "$NVMF_PORT", 00:23:56.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.582 "hdgst": ${hdgst:-false}, 00:23:56.582 "ddgst": ${ddgst:-false} 00:23:56.582 }, 00:23:56.582 "method": "bdev_nvme_attach_controller" 00:23:56.582 } 00:23:56.582 EOF 00:23:56.582 )") 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:56.583 "params": { 00:23:56.583 "name": "Nvme0", 00:23:56.583 "trtype": "tcp", 00:23:56.583 "traddr": "10.0.0.2", 00:23:56.583 "adrfam": "ipv4", 00:23:56.583 "trsvcid": "4420", 00:23:56.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:56.583 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:56.583 "hdgst": false, 00:23:56.583 "ddgst": false 00:23:56.583 }, 00:23:56.583 "method": "bdev_nvme_attach_controller" 00:23:56.583 },{ 00:23:56.583 "params": { 00:23:56.583 "name": "Nvme1", 00:23:56.583 "trtype": "tcp", 00:23:56.583 "traddr": "10.0.0.2", 00:23:56.583 "adrfam": "ipv4", 00:23:56.583 "trsvcid": "4420", 00:23:56.583 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.583 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.583 "hdgst": false, 00:23:56.583 "ddgst": false 00:23:56.583 }, 00:23:56.583 "method": "bdev_nvme_attach_controller" 00:23:56.583 }' 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:56.583 22:18:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.583 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:56.583 ... 00:23:56.583 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:56.583 ... 
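The run above is driven through SPDK's fio bdev plugin: the harness preloads build/fio/spdk_bdev and hands fio the generated bdev_nvme_attach_controller config on /dev/fd/62 and the job file on /dev/fd/61. A rough standalone equivalent is sketched below; bdev.json and dif.job are illustrative file names standing in for those descriptors, and bdev.json is assumed to hold the two attach-controller entries printed above wrapped in SPDK's usual "subsystems" config layout.

  # Sketch only: reproduce the plugin invocation outside the harness.
  # bdev.json and dif.job are placeholders for /dev/fd/62 and /dev/fd/61.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.job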
00:23:56.583 fio-3.35 00:23:56.583 Starting 4 threads 00:24:01.847 00:24:01.847 filename0: (groupid=0, jobs=1): err= 0: pid=98365: Mon Jul 15 22:18:47 2024 00:24:01.847 read: IOPS=1824, BW=14.3MiB/s (14.9MB/s)(71.3MiB/5003msec) 00:24:01.847 slat (nsec): min=7839, max=49139, avg=13203.94, stdev=4783.28 00:24:01.847 clat (usec): min=2565, max=10359, avg=4324.80, stdev=459.17 00:24:01.847 lat (usec): min=2573, max=10395, avg=4338.01, stdev=458.73 00:24:01.847 clat percentiles (usec): 00:24:01.847 | 1.00th=[ 3982], 5.00th=[ 4080], 10.00th=[ 4113], 20.00th=[ 4146], 00:24:01.847 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4178], 60.00th=[ 4228], 00:24:01.847 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 5080], 95.00th=[ 5276], 00:24:01.847 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 8160], 99.95th=[ 9372], 00:24:01.847 | 99.99th=[10421] 00:24:01.847 bw ( KiB/s): min=13056, max=15104, per=24.92%, avg=14535.11, stdev=812.35, samples=9 00:24:01.847 iops : min= 1632, max= 1888, avg=1816.89, stdev=101.54, samples=9 00:24:01.847 lat (msec) : 4=1.08%, 10=98.90%, 20=0.01% 00:24:01.847 cpu : usr=92.96%, sys=5.82%, ctx=10, majf=0, minf=0 00:24:01.847 IO depths : 1=11.4%, 2=25.0%, 4=50.0%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.847 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.847 issued rwts: total=9128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.847 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:01.847 filename0: (groupid=0, jobs=1): err= 0: pid=98366: Mon Jul 15 22:18:47 2024 00:24:01.847 read: IOPS=1824, BW=14.3MiB/s (14.9MB/s)(71.3MiB/5004msec) 00:24:01.847 slat (nsec): min=5018, max=67040, avg=9782.53, stdev=4097.50 00:24:01.847 clat (usec): min=2208, max=9379, avg=4335.59, stdev=437.94 00:24:01.847 lat (usec): min=2221, max=9390, avg=4345.37, stdev=437.86 00:24:01.847 clat percentiles (usec): 00:24:01.847 | 1.00th=[ 4015], 5.00th=[ 4080], 10.00th=[ 4113], 20.00th=[ 4146], 00:24:01.847 | 30.00th=[ 4178], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:24:01.847 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 5080], 95.00th=[ 5276], 00:24:01.847 | 99.00th=[ 5997], 99.50th=[ 6128], 99.90th=[ 7767], 99.95th=[ 9372], 00:24:01.847 | 99.99th=[ 9372] 00:24:01.847 bw ( KiB/s): min=13184, max=15104, per=24.92%, avg=14535.11, stdev=802.20, samples=9 00:24:01.847 iops : min= 1648, max= 1888, avg=1816.89, stdev=100.28, samples=9 00:24:01.847 lat (msec) : 4=0.80%, 10=99.20% 00:24:01.847 cpu : usr=93.54%, sys=5.20%, ctx=7, majf=0, minf=0 00:24:01.847 IO depths : 1=10.4%, 2=25.0%, 4=50.0%, 8=14.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.847 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.847 issued rwts: total=9128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.847 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:01.847 filename1: (groupid=0, jobs=1): err= 0: pid=98367: Mon Jul 15 22:18:47 2024 00:24:01.847 read: IOPS=1823, BW=14.2MiB/s (14.9MB/s)(71.2MiB/5002msec) 00:24:01.847 slat (nsec): min=7882, max=49650, avg=16609.28, stdev=3927.99 00:24:01.847 clat (usec): min=3164, max=10284, avg=4307.01, stdev=452.17 00:24:01.847 lat (usec): min=3184, max=10300, avg=4323.62, stdev=452.01 00:24:01.847 clat percentiles (usec): 00:24:01.847 | 1.00th=[ 4015], 5.00th=[ 4080], 10.00th=[ 4080], 20.00th=[ 4113], 00:24:01.847 | 30.00th=[ 4146], 
40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4178], 00:24:01.847 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 5014], 95.00th=[ 5276], 00:24:01.847 | 99.00th=[ 5997], 99.50th=[ 6194], 99.90th=[ 8094], 99.95th=[ 9372], 00:24:01.847 | 99.99th=[10290] 00:24:01.847 bw ( KiB/s): min=13056, max=15104, per=24.90%, avg=14524.22, stdev=804.39, samples=9 00:24:01.848 iops : min= 1632, max= 1888, avg=1815.44, stdev=100.49, samples=9 00:24:01.848 lat (msec) : 4=0.81%, 10=99.18%, 20=0.01% 00:24:01.848 cpu : usr=94.18%, sys=4.62%, ctx=9, majf=0, minf=0 00:24:01.848 IO depths : 1=11.6%, 2=25.0%, 4=50.0%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.848 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.848 issued rwts: total=9120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.848 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:01.848 filename1: (groupid=0, jobs=1): err= 0: pid=98368: Mon Jul 15 22:18:47 2024 00:24:01.848 read: IOPS=1820, BW=14.2MiB/s (14.9MB/s)(71.1MiB/5001msec) 00:24:01.848 slat (usec): min=7, max=544, avg=15.91, stdev= 6.83 00:24:01.848 clat (usec): min=2120, max=12088, avg=4316.18, stdev=540.47 00:24:01.848 lat (usec): min=2140, max=12120, avg=4332.09, stdev=540.32 00:24:01.848 clat percentiles (usec): 00:24:01.848 | 1.00th=[ 3392], 5.00th=[ 4080], 10.00th=[ 4080], 20.00th=[ 4113], 00:24:01.848 | 30.00th=[ 4146], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4178], 00:24:01.848 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 5014], 95.00th=[ 5276], 00:24:01.848 | 99.00th=[ 6128], 99.50th=[ 7111], 99.90th=[10814], 99.95th=[11207], 00:24:01.848 | 99.99th=[12125] 00:24:01.848 bw ( KiB/s): min=13168, max=15104, per=24.85%, avg=14492.44, stdev=753.31, samples=9 00:24:01.848 iops : min= 1646, max= 1888, avg=1811.56, stdev=94.16, samples=9 00:24:01.848 lat (msec) : 4=1.19%, 10=98.69%, 20=0.12% 00:24:01.848 cpu : usr=93.32%, sys=5.44%, ctx=9, majf=0, minf=9 00:24:01.848 IO depths : 1=11.0%, 2=25.0%, 4=50.0%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.848 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.848 issued rwts: total=9104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.848 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:01.848 00:24:01.848 Run status group 0 (all jobs): 00:24:01.848 READ: bw=57.0MiB/s (59.7MB/s), 14.2MiB/s-14.3MiB/s (14.9MB/s-14.9MB/s), io=285MiB (299MB), run=5001-5004msec 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.848 ************************************ 00:24:01.848 END TEST fio_dif_rand_params 00:24:01.848 ************************************ 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.848 00:24:01.848 real 0m33.623s 00:24:01.848 user 3m39.649s 00:24:01.848 sys 0m6.581s 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:01.848 22:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:01.848 22:18:48 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:24:01.848 22:18:48 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:01.848 22:18:48 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:01.848 22:18:48 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.848 22:18:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:01.848 ************************************ 00:24:01.848 START TEST fio_dif_digest 00:24:01.848 ************************************ 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 
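create_subsystems 0 here builds the DIF-enabled target that the digest test reads from; the rpc_cmd calls traced in the lines that follow do the actual work. Outside the harness, which issues these RPCs through its own rpc_cmd wrapper, the same sequence would look roughly like the sketch below (scripts/rpc.py against the default RPC socket is an assumption), and the teardown pair mirrors the destroy_subsystems calls seen earlier in the trace.

  # Null bdev with 16-byte metadata and DIF type 3, exported over NVMe/TCP.
  # Assumes an SPDK target is already running with its default RPC socket.
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  # Teardown, mirroring destroy_subsystems:
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0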
00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:01.848 bdev_null0 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:01.848 [2024-07-15 22:18:48.272998] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.848 { 00:24:01.848 "params": { 00:24:01.848 
"name": "Nvme$subsystem", 00:24:01.848 "trtype": "$TEST_TRANSPORT", 00:24:01.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.848 "adrfam": "ipv4", 00:24:01.848 "trsvcid": "$NVMF_PORT", 00:24:01.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.848 "hdgst": ${hdgst:-false}, 00:24:01.848 "ddgst": ${ddgst:-false} 00:24:01.848 }, 00:24:01.848 "method": "bdev_nvme_attach_controller" 00:24:01.848 } 00:24:01.848 EOF 00:24:01.848 )") 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:24:01.848 22:18:48 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:01.848 "params": { 00:24:01.848 "name": "Nvme0", 00:24:01.848 "trtype": "tcp", 00:24:01.848 "traddr": "10.0.0.2", 00:24:01.848 "adrfam": "ipv4", 00:24:01.849 "trsvcid": "4420", 00:24:01.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.849 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:01.849 "hdgst": true, 00:24:01.849 "ddgst": true 00:24:01.849 }, 00:24:01.849 "method": "bdev_nvme_attach_controller" 00:24:01.849 }' 00:24:01.849 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:01.849 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:01.849 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.849 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:01.849 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:01.849 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:01.849 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:01.849 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:01.849 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:01.849 22:18:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:01.849 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:01.849 ... 
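For reference, the banner above maps onto a job file along the following lines. The bs=128k, numjobs=3, iodepth=3 and runtime=10 values come from the target/dif.sh settings traced earlier; the filename, thread and time_based entries are assumptions about how the harness fills in its fio template. The header and data digests are not fio options at all; they are enabled by the "hdgst"/"ddgst" flags in the JSON printed above.

  # Illustrative job file only; filename, thread and time_based are assumed,
  # not taken from the trace.
  [global]
  ioengine=spdk_bdev
  thread=1
  time_based=1
  runtime=10
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3

  [filename0]
  filename=Nvme0n1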
00:24:01.849 fio-3.35 00:24:01.849 Starting 3 threads 00:24:14.044 00:24:14.044 filename0: (groupid=0, jobs=1): err= 0: pid=98470: Mon Jul 15 22:18:58 2024 00:24:14.044 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(290MiB/10044msec) 00:24:14.044 slat (nsec): min=7430, max=51545, avg=14433.43, stdev=3307.23 00:24:14.044 clat (usec): min=10037, max=54149, avg=12941.43, stdev=2464.04 00:24:14.044 lat (usec): min=10051, max=54161, avg=12955.86, stdev=2464.04 00:24:14.044 clat percentiles (usec): 00:24:14.044 | 1.00th=[10945], 5.00th=[11469], 10.00th=[11863], 20.00th=[12125], 00:24:14.044 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:24:14.044 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[14091], 00:24:14.044 | 99.00th=[15795], 99.50th=[17171], 99.90th=[53740], 99.95th=[53740], 00:24:14.044 | 99.99th=[54264] 00:24:14.044 bw ( KiB/s): min=27392, max=30720, per=38.13%, avg=29693.15, stdev=1000.61, samples=20 00:24:14.044 iops : min= 214, max= 240, avg=231.95, stdev= 7.86, samples=20 00:24:14.044 lat (msec) : 20=99.66%, 50=0.04%, 100=0.30% 00:24:14.044 cpu : usr=92.25%, sys=6.28%, ctx=5, majf=0, minf=0 00:24:14.044 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.044 issued rwts: total=2322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.044 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:14.044 filename0: (groupid=0, jobs=1): err= 0: pid=98471: Mon Jul 15 22:18:58 2024 00:24:14.044 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(207MiB/10002msec) 00:24:14.044 slat (nsec): min=8171, max=51816, avg=13885.31, stdev=4412.15 00:24:14.044 clat (usec): min=9602, max=26370, avg=18107.51, stdev=1233.97 00:24:14.044 lat (usec): min=9618, max=26394, avg=18121.39, stdev=1234.50 00:24:14.044 clat percentiles (usec): 00:24:14.044 | 1.00th=[12780], 5.00th=[16581], 10.00th=[16909], 20.00th=[17433], 00:24:14.044 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18220], 00:24:14.044 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19268], 95.00th=[19530], 00:24:14.044 | 99.00th=[22152], 99.50th=[22676], 99.90th=[26346], 99.95th=[26346], 00:24:14.044 | 99.99th=[26346] 00:24:14.044 bw ( KiB/s): min=20008, max=22016, per=27.18%, avg=21169.26, stdev=507.23, samples=19 00:24:14.044 iops : min= 156, max= 172, avg=165.37, stdev= 4.00, samples=19 00:24:14.044 lat (msec) : 10=0.06%, 20=96.98%, 50=2.96% 00:24:14.044 cpu : usr=93.20%, sys=5.55%, ctx=5, majf=0, minf=9 00:24:14.044 IO depths : 1=9.9%, 2=90.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.044 issued rwts: total=1655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.044 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:14.044 filename0: (groupid=0, jobs=1): err= 0: pid=98472: Mon Jul 15 22:18:58 2024 00:24:14.044 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(267MiB/10003msec) 00:24:14.044 slat (nsec): min=7259, max=46709, avg=13585.92, stdev=3435.29 00:24:14.044 clat (usec): min=7604, max=19736, avg=14042.35, stdev=1201.16 00:24:14.044 lat (usec): min=7616, max=19750, avg=14055.94, stdev=1201.18 00:24:14.044 clat percentiles (usec): 00:24:14.044 | 1.00th=[ 9765], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:24:14.044 | 
30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:24:14.044 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:24:14.044 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19268], 99.95th=[19792], 00:24:14.044 | 99.99th=[19792] 00:24:14.044 bw ( KiB/s): min=25293, max=28928, per=35.01%, avg=27268.05, stdev=839.94, samples=19 00:24:14.044 iops : min= 197, max= 226, avg=213.00, stdev= 6.64, samples=19 00:24:14.044 lat (msec) : 10=1.03%, 20=98.97% 00:24:14.044 cpu : usr=92.33%, sys=6.29%, ctx=13, majf=0, minf=9 00:24:14.044 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.044 issued rwts: total=2134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.044 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:14.044 00:24:14.044 Run status group 0 (all jobs): 00:24:14.044 READ: bw=76.1MiB/s (79.7MB/s), 20.7MiB/s-28.9MiB/s (21.7MB/s-30.3MB/s), io=764MiB (801MB), run=10002-10044msec 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:14.044 ************************************ 00:24:14.044 END TEST fio_dif_digest 00:24:14.044 ************************************ 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.044 00:24:14.044 real 0m10.904s 00:24:14.044 user 0m28.445s 00:24:14.044 sys 0m2.046s 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:14.044 22:18:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:14.044 22:18:59 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:24:14.044 22:18:59 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:14.044 22:18:59 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:14.044 22:18:59 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.045 rmmod nvme_tcp 00:24:14.045 rmmod nvme_fabrics 00:24:14.045 rmmod 
nvme_keyring 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97648 ']' 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97648 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 97648 ']' 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 97648 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97648 00:24:14.045 killing process with pid 97648 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97648' 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@967 -- # kill 97648 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@972 -- # wait 97648 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:14.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:14.045 Waiting for block devices as requested 00:24:14.045 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:14.045 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.045 22:18:59 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:14.045 ************************************ 00:24:14.045 END TEST nvmf_dif 00:24:14.045 ************************************ 00:24:14.045 00:24:14.045 real 1m8.741s 00:24:14.045 user 5m27.548s 00:24:14.045 sys 0m16.615s 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:14.045 22:18:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:14.045 22:19:00 -- common/autotest_common.sh@1142 -- # return 0 00:24:14.045 22:19:00 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:14.045 22:19:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:14.045 22:19:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.045 22:19:00 -- common/autotest_common.sh@10 -- # set +x 00:24:14.045 ************************************ 00:24:14.045 START TEST nvmf_abort_qd_sizes 00:24:14.045 ************************************ 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:14.045 * Looking for test storage... 00:24:14.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:14.045 22:19:00 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:14.045 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:14.046 Cannot find device "nvmf_tgt_br" 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:14.046 Cannot find device "nvmf_tgt_br2" 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:14.046 Cannot find device "nvmf_tgt_br" 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:14.046 Cannot find device "nvmf_tgt_br2" 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:14.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:14.046 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:14.046 22:19:00 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:14.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:24:14.046 00:24:14.046 --- 10.0.0.2 ping statistics --- 00:24:14.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.046 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:14.046 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:14.046 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:24:14.046 00:24:14.046 --- 10.0.0.3 ping statistics --- 00:24:14.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.046 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:14.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:14.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:24:14.046 00:24:14.046 --- 10.0.0.1 ping statistics --- 00:24:14.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.046 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:14.046 22:19:00 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:14.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:14.305 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:14.305 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99058 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99058 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 99058 ']' 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:14.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:14.564 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:14.564 [2024-07-15 22:19:01.363854] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
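(Aside on the interface setup and ping checks above: nvmf_veth_init builds a small veth-plus-bridge topology so the initiator at 10.0.0.1 can reach the target namespace at 10.0.0.2/10.0.0.3 over TCP port 4420. A minimal sketch, condensed from the commands visible in this log and assuming root privileges; the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way as the first.)

  # one veth pair for the initiator side, one for the target namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # addresses: initiator on 10.0.0.1, target namespace on 10.0.0.2
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # bring everything up and bridge the two "outer" ends together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # let NVMe/TCP traffic in, allow bridged traffic, then sanity-check both directions
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

(The ping statistics that follow in the log are the output of exactly these checks.)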
00:24:14.564 [2024-07-15 22:19:01.363974] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.564 [2024-07-15 22:19:01.501066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.823 [2024-07-15 22:19:01.562253] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.823 [2024-07-15 22:19:01.562510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.823 [2024-07-15 22:19:01.562654] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.823 [2024-07-15 22:19:01.562857] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.823 [2024-07-15 22:19:01.562954] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.823 [2024-07-15 22:19:01.563167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.823 [2024-07-15 22:19:01.563246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.823 [2024-07-15 22:19:01.563301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.823 [2024-07-15 22:19:01.563304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:24:14.823 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:14.824 22:19:01 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.824 22:19:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:14.824 ************************************ 00:24:14.824 START TEST spdk_target_abort 00:24:14.824 ************************************ 00:24:14.824 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:24:14.824 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:14.824 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:14.824 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.824 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:15.083 spdk_targetn1 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:15.083 [2024-07-15 22:19:01.816729] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:15.083 [2024-07-15 22:19:01.844890] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.083 22:19:01 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:15.083 22:19:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:18.364 Initializing NVMe Controllers 00:24:18.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:18.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:18.364 Initialization complete. Launching workers. 
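(Note on the setup just performed and on the three abort runs whose results follow: rpc_cmd is a thin wrapper around scripts/rpc.py talking to the nvmf_tgt started earlier, so the spdk_target side of this test can be reproduced by hand with a few RPCs plus the abort example. A rough sketch, with every value copied from the log and the repo path assumed to match this run:)

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # what rpc_cmd resolves to here

  # claim the local PCIe NVMe drive as an SPDK bdev, then export it over NVMe/TCP
  $RPC bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # creates spdk_targetn1
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

  # hammer the subsystem with aborts at the three queue depths used below (4, 24, 64)
  for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done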
00:24:18.364 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10907, failed: 0 00:24:18.364 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1067, failed to submit 9840 00:24:18.364 success 785, unsuccess 282, failed 0 00:24:18.364 22:19:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:18.364 22:19:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:21.672 Initializing NVMe Controllers 00:24:21.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:21.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:21.672 Initialization complete. Launching workers. 00:24:21.672 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5960, failed: 0 00:24:21.672 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1285, failed to submit 4675 00:24:21.672 success 229, unsuccess 1056, failed 0 00:24:21.672 22:19:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:21.672 22:19:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:24.952 Initializing NVMe Controllers 00:24:24.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:24.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:24.952 Initialization complete. Launching workers. 
00:24:24.952 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29488, failed: 0 00:24:24.952 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2591, failed to submit 26897 00:24:24.952 success 358, unsuccess 2233, failed 0 00:24:24.952 22:19:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:24.952 22:19:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.952 22:19:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:24.952 22:19:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.952 22:19:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:24.952 22:19:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.952 22:19:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:25.888 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.888 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99058 00:24:25.888 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 99058 ']' 00:24:25.888 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 99058 00:24:25.888 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:24:25.888 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.888 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99058 00:24:25.888 killing process with pid 99058 00:24:25.888 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:25.888 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:25.888 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99058' 00:24:25.888 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 99058 00:24:25.888 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 99058 00:24:25.888 00:24:25.888 real 0m11.112s 00:24:25.888 user 0m42.055s 00:24:25.888 sys 0m1.734s 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:26.145 ************************************ 00:24:26.145 END TEST spdk_target_abort 00:24:26.145 ************************************ 00:24:26.145 22:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:26.145 22:19:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:26.145 22:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:26.145 22:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:26.145 22:19:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:26.145 
************************************ 00:24:26.145 START TEST kernel_target_abort 00:24:26.145 ************************************ 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:26.145 22:19:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:26.403 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:26.403 Waiting for block devices as requested 00:24:26.403 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:26.660 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:26.660 No valid GPT data, bailing 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:26.660 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:26.660 No valid GPT data, bailing 00:24:26.944 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
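(The device probing above is how configure_kernel_target picks a backing namespace for the kernel target: it walks /sys/block/nvme*, skips zoned namespaces and anything that already carries a partition table or is otherwise in use, and ends up with the last free device it sees, /dev/nvme1n1 in this run. A simplified stand-in for that logic, using plain blkid instead of the scripts/spdk-gpt.py helper; pick_kernel_target_dev is a hypothetical name, not a function from the repo:)

  pick_kernel_target_dev() {
    local block dev picked=
    for block in /sys/block/nvme*; do
      [[ -e $block ]] || continue
      dev=/dev/${block##*/}
      # skip zoned namespaces
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      # skip devices that already carry a partition table
      [[ -n $(blkid -s PTTYPE -o value "$dev" 2>/dev/null) ]] && continue
      picked=$dev
    done
    [[ -n $picked ]] && echo "$picked"
  }
  pick_kernel_target_dev   # -> /dev/nvme1n1 in this run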
00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:26.945 No valid GPT data, bailing 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:26.945 No valid GPT data, bailing 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 --hostid=ff65e169-209e-4b79-b82d-da213c413a29 -a 10.0.0.1 -t tcp -s 4420 00:24:26.945 00:24:26.945 Discovery Log Number of Records 2, Generation counter 2 00:24:26.945 =====Discovery Log Entry 0====== 00:24:26.945 trtype: tcp 00:24:26.945 adrfam: ipv4 00:24:26.945 subtype: current discovery subsystem 00:24:26.945 treq: not specified, sq flow control disable supported 00:24:26.945 portid: 1 00:24:26.945 trsvcid: 4420 00:24:26.945 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:26.945 traddr: 10.0.0.1 00:24:26.945 eflags: none 00:24:26.945 sectype: none 00:24:26.945 =====Discovery Log Entry 1====== 00:24:26.945 trtype: tcp 00:24:26.945 adrfam: ipv4 00:24:26.945 subtype: nvme subsystem 00:24:26.945 treq: not specified, sq flow control disable supported 00:24:26.945 portid: 1 00:24:26.945 trsvcid: 4420 00:24:26.945 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:26.945 traddr: 10.0.0.1 00:24:26.945 eflags: none 00:24:26.945 sectype: none 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:26.945 22:19:13 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:26.945 22:19:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:30.227 Initializing NVMe Controllers 00:24:30.227 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:30.227 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:30.227 Initialization complete. Launching workers. 00:24:30.227 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33753, failed: 0 00:24:30.227 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33753, failed to submit 0 00:24:30.227 success 0, unsuccess 33753, failed 0 00:24:30.227 22:19:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:30.227 22:19:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:33.512 Initializing NVMe Controllers 00:24:33.512 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:33.512 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:33.512 Initialization complete. Launching workers. 
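(For reference, the kernel target exercised here is configured entirely through nvmet configfs. bash xtrace does not print redirection targets, so the attribute paths in the sketch below follow the standard nvmet configfs layout rather than being copied from the log; the echoed values are the ones the log shows. Run as root:)

  modprobe nvmet
  modprobe nvmet_tcp        # the log loads nvmet explicitly; nvmet_tcp shown here to keep the sketch self-contained
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir "$sub"
  mkdir "$sub/namespaces/1"
  mkdir "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"
  echo 1 > "$sub/attr_allow_any_host"
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp  > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"

  # the discovery step in the log confirms the port is listening
  # (the --hostnqn/--hostid flags used there are omitted in this sketch)
  nvme discover -t tcp -a 10.0.0.1 -s 4420

(Teardown, seen later as clean_kernel_target, reverses this: echo 0 to the namespace enable file, remove the subsystems symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.)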
00:24:33.512 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64441, failed: 0 00:24:33.512 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27768, failed to submit 36673 00:24:33.512 success 0, unsuccess 27768, failed 0 00:24:33.512 22:19:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:33.512 22:19:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:36.791 Initializing NVMe Controllers 00:24:36.791 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:36.791 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:36.791 Initialization complete. Launching workers. 00:24:36.791 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77786, failed: 0 00:24:36.791 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19410, failed to submit 58376 00:24:36.791 success 0, unsuccess 19410, failed 0 00:24:36.791 22:19:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:36.791 22:19:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:36.791 22:19:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:24:36.791 22:19:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:36.791 22:19:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:36.791 22:19:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:36.791 22:19:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:36.791 22:19:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:36.791 22:19:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:36.791 22:19:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:37.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:39.249 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:39.249 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:39.249 ************************************ 00:24:39.249 END TEST kernel_target_abort 00:24:39.249 ************************************ 00:24:39.249 00:24:39.249 real 0m12.936s 00:24:39.249 user 0m6.451s 00:24:39.249 sys 0m3.993s 00:24:39.249 22:19:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:39.249 22:19:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:39.249 
22:19:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:39.249 rmmod nvme_tcp 00:24:39.249 rmmod nvme_fabrics 00:24:39.249 rmmod nvme_keyring 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:39.249 Process with pid 99058 is not found 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99058 ']' 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99058 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 99058 ']' 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 99058 00:24:39.249 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (99058) - No such process 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 99058 is not found' 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:39.249 22:19:25 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:39.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:39.507 Waiting for block devices as requested 00:24:39.507 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:39.507 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:39.766 22:19:26 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:39.766 22:19:26 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:39.766 22:19:26 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.766 22:19:26 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:39.766 22:19:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.766 22:19:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:39.766 22:19:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.766 22:19:26 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:39.766 00:24:39.766 real 0m26.493s 00:24:39.766 user 0m49.482s 00:24:39.766 sys 0m6.978s 00:24:39.766 22:19:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:39.766 22:19:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:39.766 ************************************ 00:24:39.766 END TEST nvmf_abort_qd_sizes 00:24:39.766 ************************************ 00:24:39.766 22:19:26 -- common/autotest_common.sh@1142 -- # return 0 00:24:39.766 22:19:26 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:39.766 22:19:26 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:24:39.766 22:19:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:39.766 22:19:26 -- common/autotest_common.sh@10 -- # set +x 00:24:39.766 ************************************ 00:24:39.766 START TEST keyring_file 00:24:39.766 ************************************ 00:24:39.766 22:19:26 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:39.766 * Looking for test storage... 00:24:39.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:39.766 22:19:26 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:39.766 22:19:26 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:39.766 22:19:26 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.766 22:19:26 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.766 22:19:26 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.766 22:19:26 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.766 22:19:26 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.766 22:19:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.766 22:19:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:39.766 22:19:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@47 -- # : 0 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.766 22:19:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:39.766 22:19:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:39.766 22:19:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:39.766 22:19:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:39.766 22:19:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:39.766 22:19:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:39.766 22:19:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:39.766 22:19:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:39.766 22:19:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:39.766 22:19:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:39.766 22:19:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:39.766 22:19:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:39.766 22:19:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nEBae1yytI 00:24:39.766 22:19:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:39.766 22:19:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:40.025 22:19:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nEBae1yytI 00:24:40.025 22:19:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nEBae1yytI 00:24:40.025 22:19:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.nEBae1yytI 00:24:40.025 22:19:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:40.025 22:19:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:40.025 22:19:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:40.025 22:19:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:40.025 22:19:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:40.025 22:19:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:40.025 22:19:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Efwwx5jSbA 00:24:40.025 22:19:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:40.025 22:19:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:40.025 22:19:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:40.025 22:19:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:40.025 22:19:26 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:40.025 22:19:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:40.025 22:19:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:40.025 22:19:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Efwwx5jSbA 00:24:40.025 22:19:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Efwwx5jSbA 00:24:40.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.025 22:19:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Efwwx5jSbA 00:24:40.025 22:19:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=99918 00:24:40.025 22:19:26 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:40.025 22:19:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99918 00:24:40.025 22:19:26 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99918 ']' 00:24:40.025 22:19:26 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.025 22:19:26 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:40.025 22:19:26 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.025 22:19:26 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:40.025 22:19:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:40.025 [2024-07-15 22:19:26.858387] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
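The prep_key steps above turn a raw hex key into a TLS PSK file for the keyring tests. A minimal sketch of the pattern, assuming the format_interchange_psk helper from nvmf/common.sh is available in the shell; the exact NVMeTLSkey-1 framing is produced by the inline 'python -' step and is not spelled out in the trace:

  key_hex=00112233445566778899aabbccddeeff
  path=$(mktemp)                                   # e.g. /tmp/tmp.nEBae1yytI above
  format_interchange_psk "$key_hex" 0 > "$path"    # digest 0, wraps the hex key as NVMeTLSkey-1
  chmod 0600 "$path"                               # keyring_file_add_key later rejects looser modes such as 0660
  echo "$path"                                     # handed to keyring_file_add_key as key0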
00:24:40.025 [2024-07-15 22:19:26.858710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99918 ] 00:24:40.283 [2024-07-15 22:19:26.996200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.283 [2024-07-15 22:19:27.067405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:41.214 22:19:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:41.214 [2024-07-15 22:19:27.888241] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.214 null0 00:24:41.214 [2024-07-15 22:19:27.920258] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.214 [2024-07-15 22:19:27.920705] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:41.214 [2024-07-15 22:19:27.928208] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.214 22:19:27 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:41.214 [2024-07-15 22:19:27.940197] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:41.214 request: 00:24:41.214 2024/07/15 22:19:27 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:24:41.214 { 00:24:41.214 "method": "nvmf_subsystem_add_listener", 00:24:41.214 "params": { 00:24:41.214 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.214 "secure_channel": false, 00:24:41.214 "listen_address": { 00:24:41.214 "trtype": "tcp", 00:24:41.214 "traddr": "127.0.0.1", 00:24:41.214 "trsvcid": "4420" 00:24:41.214 } 00:24:41.214 } 00:24:41.214 } 00:24:41.214 Got JSON-RPC error 
response 00:24:41.214 GoRPCClient: error on JSON-RPC call 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:41.214 22:19:27 keyring_file -- keyring/file.sh@46 -- # bperfpid=99953 00:24:41.214 22:19:27 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:41.214 22:19:27 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99953 /var/tmp/bperf.sock 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99953 ']' 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:41.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.214 22:19:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:41.214 [2024-07-15 22:19:28.004133] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 00:24:41.214 [2024-07-15 22:19:28.004469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99953 ] 00:24:41.214 [2024-07-15 22:19:28.140958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.471 [2024-07-15 22:19:28.210026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.034 22:19:28 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.034 22:19:28 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:42.034 22:19:28 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nEBae1yytI 00:24:42.034 22:19:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nEBae1yytI 00:24:42.599 22:19:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Efwwx5jSbA 00:24:42.599 22:19:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Efwwx5jSbA 00:24:42.878 22:19:29 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:42.878 22:19:29 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:42.878 22:19:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:42.878 22:19:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:42.878 22:19:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:43.154 22:19:29 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.nEBae1yytI == 
\/\t\m\p\/\t\m\p\.\n\E\B\a\e\1\y\y\t\I ]] 00:24:43.154 22:19:29 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:24:43.154 22:19:29 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:43.154 22:19:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:43.154 22:19:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:43.154 22:19:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:43.412 22:19:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Efwwx5jSbA == \/\t\m\p\/\t\m\p\.\E\f\w\w\x\5\j\S\b\A ]] 00:24:43.412 22:19:30 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:24:43.412 22:19:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:43.412 22:19:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:43.412 22:19:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:43.412 22:19:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:43.412 22:19:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:43.710 22:19:30 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:43.710 22:19:30 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:24:43.710 22:19:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:43.710 22:19:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:43.710 22:19:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:43.710 22:19:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:43.710 22:19:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:43.984 22:19:30 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:43.984 22:19:30 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:43.984 22:19:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:44.241 [2024-07-15 22:19:31.059648] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.241 nvme0n1 00:24:44.241 22:19:31 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:24:44.241 22:19:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:44.241 22:19:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:44.241 22:19:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:44.241 22:19:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:44.241 22:19:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:44.498 22:19:31 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:24:44.498 22:19:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:24:44.498 22:19:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:44.498 22:19:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:44.498 22:19:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:24:44.498 22:19:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:44.498 22:19:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:44.754 22:19:31 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:24:44.754 22:19:31 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:45.065 Running I/O for 1 seconds... 00:24:45.997 00:24:45.997 Latency(us) 00:24:45.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.997 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:45.997 nvme0n1 : 1.01 10968.53 42.85 0.00 0.00 11634.50 5391.83 23473.80 00:24:45.997 =================================================================================================================== 00:24:45.997 Total : 10968.53 42.85 0.00 0.00 11634.50 5391.83 23473.80 00:24:45.997 0 00:24:45.997 22:19:32 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:45.997 22:19:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:46.255 22:19:33 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:24:46.255 22:19:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:46.255 22:19:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:46.255 22:19:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.255 22:19:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.255 22:19:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:46.512 22:19:33 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:24:46.512 22:19:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:24:46.512 22:19:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:46.512 22:19:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:46.512 22:19:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.512 22:19:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.512 22:19:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:46.769 22:19:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:46.769 22:19:33 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:46.769 22:19:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:46.769 22:19:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:46.769 22:19:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:46.769 22:19:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.769 22:19:33 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:46.769 22:19:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:24:46.769 22:19:33 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:46.769 22:19:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:47.027 [2024-07-15 22:19:33.955763] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:47.027 [2024-07-15 22:19:33.955940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1577f30 (107): Transport endpoint is not connected 00:24:47.027 [2024-07-15 22:19:33.956930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1577f30 (9): Bad file descriptor 00:24:47.027 [2024-07-15 22:19:33.957926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:47.027 [2024-07-15 22:19:33.957950] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:47.027 [2024-07-15 22:19:33.957960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:47.027 2024/07/15 22:19:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:47.027 request: 00:24:47.027 { 00:24:47.027 "method": "bdev_nvme_attach_controller", 00:24:47.027 "params": { 00:24:47.027 "name": "nvme0", 00:24:47.027 "trtype": "tcp", 00:24:47.027 "traddr": "127.0.0.1", 00:24:47.027 "adrfam": "ipv4", 00:24:47.027 "trsvcid": "4420", 00:24:47.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:47.027 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:47.027 "prchk_reftag": false, 00:24:47.027 "prchk_guard": false, 00:24:47.027 "hdgst": false, 00:24:47.027 "ddgst": false, 00:24:47.027 "psk": "key1" 00:24:47.027 } 00:24:47.027 } 00:24:47.027 Got JSON-RPC error response 00:24:47.027 GoRPCClient: error on JSON-RPC call 00:24:47.285 22:19:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:47.285 22:19:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:47.285 22:19:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:47.285 22:19:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:47.285 22:19:33 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:24:47.285 22:19:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:47.285 22:19:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:47.285 22:19:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:47.285 22:19:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:47.285 22:19:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.285 22:19:34 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:24:47.543 
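The refcount checks running here follow one pattern throughout the test: register the key files with the bdevperf RPC socket, attach a controller that references key0, then read back .refcnt with jq; an attach that names the mismatched key1 is wrapped in NOT and expected to fail. A condensed sketch of that pattern, using the same rpc.py flags as the trace:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_file_add_key key0 /tmp/tmp.nEBae1yytI
  $rpc keyring_file_add_key key1 /tmp/tmp.Efwwx5jSbA
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # 2 while the controller is attached
  $rpc keyring_get_keys | jq -r '.[] | select(.name == "key1") | .refcnt'   # stays 1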
22:19:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:24:47.543 22:19:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:47.543 22:19:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:47.543 22:19:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:47.543 22:19:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.543 22:19:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:47.800 22:19:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:47.800 22:19:34 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:24:47.800 22:19:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:48.058 22:19:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:24:48.058 22:19:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:48.315 22:19:35 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:24:48.315 22:19:35 keyring_file -- keyring/file.sh@77 -- # jq length 00:24:48.315 22:19:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:48.573 22:19:35 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:24:48.573 22:19:35 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.nEBae1yytI 00:24:48.573 22:19:35 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.nEBae1yytI 00:24:48.573 22:19:35 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:48.573 22:19:35 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.nEBae1yytI 00:24:48.573 22:19:35 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:48.573 22:19:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.573 22:19:35 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:48.573 22:19:35 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.573 22:19:35 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nEBae1yytI 00:24:48.573 22:19:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nEBae1yytI 00:24:48.831 [2024-07-15 22:19:35.585348] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nEBae1yytI': 0100660 00:24:48.831 [2024-07-15 22:19:35.585396] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:48.831 2024/07/15 22:19:35 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.nEBae1yytI], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:24:48.831 request: 00:24:48.831 { 00:24:48.831 "method": "keyring_file_add_key", 00:24:48.831 "params": { 00:24:48.831 "name": "key0", 00:24:48.831 "path": "/tmp/tmp.nEBae1yytI" 00:24:48.831 } 00:24:48.831 } 00:24:48.831 Got JSON-RPC error response 00:24:48.831 GoRPCClient: error on JSON-RPC call 00:24:48.831 22:19:35 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:24:48.831 22:19:35 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:48.831 22:19:35 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:48.831 22:19:35 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:48.831 22:19:35 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.nEBae1yytI 00:24:48.831 22:19:35 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nEBae1yytI 00:24:48.831 22:19:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nEBae1yytI 00:24:49.089 22:19:35 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.nEBae1yytI 00:24:49.089 22:19:35 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:24:49.089 22:19:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:49.089 22:19:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:49.089 22:19:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:49.089 22:19:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:49.089 22:19:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:49.347 22:19:36 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:24:49.347 22:19:36 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:49.347 22:19:36 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:49.347 22:19:36 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:49.347 22:19:36 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:49.347 22:19:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:49.347 22:19:36 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:49.347 22:19:36 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:49.347 22:19:36 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:49.347 22:19:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:49.605 [2024-07-15 22:19:36.473573] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.nEBae1yytI': No such file or directory 00:24:49.605 [2024-07-15 22:19:36.473622] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:49.605 [2024-07-15 22:19:36.473657] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:49.605 [2024-07-15 22:19:36.473671] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:49.605 [2024-07-15 22:19:36.473681] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:49.605 2024/07/15 
22:19:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:24:49.605 request: 00:24:49.605 { 00:24:49.605 "method": "bdev_nvme_attach_controller", 00:24:49.605 "params": { 00:24:49.605 "name": "nvme0", 00:24:49.605 "trtype": "tcp", 00:24:49.605 "traddr": "127.0.0.1", 00:24:49.605 "adrfam": "ipv4", 00:24:49.605 "trsvcid": "4420", 00:24:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:49.605 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:49.605 "prchk_reftag": false, 00:24:49.605 "prchk_guard": false, 00:24:49.605 "hdgst": false, 00:24:49.605 "ddgst": false, 00:24:49.605 "psk": "key0" 00:24:49.605 } 00:24:49.605 } 00:24:49.605 Got JSON-RPC error response 00:24:49.605 GoRPCClient: error on JSON-RPC call 00:24:49.605 22:19:36 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:49.605 22:19:36 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:49.605 22:19:36 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:49.605 22:19:36 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:49.605 22:19:36 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:24:49.605 22:19:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:49.863 22:19:36 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:49.863 22:19:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:49.863 22:19:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:49.863 22:19:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:49.863 22:19:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:49.863 22:19:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:49.863 22:19:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pcp0lpkvqq 00:24:49.863 22:19:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:49.863 22:19:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:49.863 22:19:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:49.863 22:19:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:49.863 22:19:36 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:49.863 22:19:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:49.863 22:19:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:50.120 22:19:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pcp0lpkvqq 00:24:50.120 22:19:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pcp0lpkvqq 00:24:50.120 22:19:36 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.pcp0lpkvqq 00:24:50.120 22:19:36 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pcp0lpkvqq 00:24:50.120 22:19:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pcp0lpkvqq 00:24:50.416 22:19:37 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:50.416 22:19:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:50.673 nvme0n1 00:24:50.673 22:19:37 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:24:50.673 22:19:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:50.673 22:19:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:50.673 22:19:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:50.673 22:19:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:50.673 22:19:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:50.930 22:19:37 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:24:50.930 22:19:37 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:24:50.930 22:19:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:51.187 22:19:38 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:24:51.187 22:19:38 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:24:51.187 22:19:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:51.187 22:19:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:51.187 22:19:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:51.445 22:19:38 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:24:51.445 22:19:38 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:24:51.445 22:19:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:51.445 22:19:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:51.445 22:19:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:51.445 22:19:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:51.445 22:19:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:51.703 22:19:38 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:51.703 22:19:38 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:51.703 22:19:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:51.960 22:19:38 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:51.960 22:19:38 keyring_file -- keyring/file.sh@104 -- # jq length 00:24:51.960 22:19:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:52.218 22:19:39 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:52.218 22:19:39 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pcp0lpkvqq 00:24:52.218 22:19:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pcp0lpkvqq 
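The sequence just above (keyring/file.sh@99 through @107) checks what happens when a key file is removed while a controller still holds it: the key is flagged removed but its reference survives until the controller detaches, after which the keyring is empty and the key can be re-registered. A sketch of that check with the same RPCs:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'    # 2: the file plus the controller
  $rpc keyring_file_remove_key key0
  $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .removed'   # true
  $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'    # 1: the controller still holds it
  $rpc bdev_nvme_detach_controller nvme0
  $rpc keyring_get_keys | jq length                                          # 0
  $rpc keyring_file_add_key key0 /tmp/tmp.pcp0lpkvqq                         # re-register for the next stage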
00:24:52.782 22:19:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Efwwx5jSbA 00:24:52.782 22:19:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Efwwx5jSbA 00:24:52.782 22:19:39 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:52.782 22:19:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:53.347 nvme0n1 00:24:53.347 22:19:40 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:24:53.347 22:19:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:53.604 22:19:40 keyring_file -- keyring/file.sh@112 -- # config='{ 00:24:53.604 "subsystems": [ 00:24:53.604 { 00:24:53.604 "subsystem": "keyring", 00:24:53.604 "config": [ 00:24:53.604 { 00:24:53.604 "method": "keyring_file_add_key", 00:24:53.604 "params": { 00:24:53.604 "name": "key0", 00:24:53.604 "path": "/tmp/tmp.pcp0lpkvqq" 00:24:53.604 } 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "method": "keyring_file_add_key", 00:24:53.604 "params": { 00:24:53.604 "name": "key1", 00:24:53.604 "path": "/tmp/tmp.Efwwx5jSbA" 00:24:53.604 } 00:24:53.604 } 00:24:53.604 ] 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "subsystem": "iobuf", 00:24:53.604 "config": [ 00:24:53.604 { 00:24:53.604 "method": "iobuf_set_options", 00:24:53.604 "params": { 00:24:53.604 "large_bufsize": 135168, 00:24:53.604 "large_pool_count": 1024, 00:24:53.604 "small_bufsize": 8192, 00:24:53.604 "small_pool_count": 8192 00:24:53.604 } 00:24:53.604 } 00:24:53.604 ] 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "subsystem": "sock", 00:24:53.604 "config": [ 00:24:53.604 { 00:24:53.604 "method": "sock_set_default_impl", 00:24:53.604 "params": { 00:24:53.604 "impl_name": "posix" 00:24:53.604 } 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "method": "sock_impl_set_options", 00:24:53.604 "params": { 00:24:53.604 "enable_ktls": false, 00:24:53.604 "enable_placement_id": 0, 00:24:53.604 "enable_quickack": false, 00:24:53.604 "enable_recv_pipe": true, 00:24:53.604 "enable_zerocopy_send_client": false, 00:24:53.604 "enable_zerocopy_send_server": true, 00:24:53.604 "impl_name": "ssl", 00:24:53.604 "recv_buf_size": 4096, 00:24:53.604 "send_buf_size": 4096, 00:24:53.604 "tls_version": 0, 00:24:53.604 "zerocopy_threshold": 0 00:24:53.604 } 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "method": "sock_impl_set_options", 00:24:53.604 "params": { 00:24:53.604 "enable_ktls": false, 00:24:53.604 "enable_placement_id": 0, 00:24:53.604 "enable_quickack": false, 00:24:53.604 "enable_recv_pipe": true, 00:24:53.604 "enable_zerocopy_send_client": false, 00:24:53.604 "enable_zerocopy_send_server": true, 00:24:53.604 "impl_name": "posix", 00:24:53.604 "recv_buf_size": 2097152, 00:24:53.604 "send_buf_size": 2097152, 00:24:53.604 "tls_version": 0, 00:24:53.604 "zerocopy_threshold": 0 00:24:53.604 } 00:24:53.604 } 00:24:53.604 ] 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "subsystem": "vmd", 00:24:53.604 "config": [] 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "subsystem": "accel", 00:24:53.604 "config": [ 00:24:53.604 { 00:24:53.604 "method": 
"accel_set_options", 00:24:53.604 "params": { 00:24:53.604 "buf_count": 2048, 00:24:53.604 "large_cache_size": 16, 00:24:53.604 "sequence_count": 2048, 00:24:53.604 "small_cache_size": 128, 00:24:53.604 "task_count": 2048 00:24:53.604 } 00:24:53.604 } 00:24:53.604 ] 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "subsystem": "bdev", 00:24:53.604 "config": [ 00:24:53.604 { 00:24:53.604 "method": "bdev_set_options", 00:24:53.604 "params": { 00:24:53.604 "bdev_auto_examine": true, 00:24:53.604 "bdev_io_cache_size": 256, 00:24:53.604 "bdev_io_pool_size": 65535, 00:24:53.604 "iobuf_large_cache_size": 16, 00:24:53.604 "iobuf_small_cache_size": 128 00:24:53.604 } 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "method": "bdev_raid_set_options", 00:24:53.604 "params": { 00:24:53.604 "process_window_size_kb": 1024 00:24:53.604 } 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "method": "bdev_iscsi_set_options", 00:24:53.604 "params": { 00:24:53.604 "timeout_sec": 30 00:24:53.604 } 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "method": "bdev_nvme_set_options", 00:24:53.604 "params": { 00:24:53.604 "action_on_timeout": "none", 00:24:53.604 "allow_accel_sequence": false, 00:24:53.604 "arbitration_burst": 0, 00:24:53.604 "bdev_retry_count": 3, 00:24:53.604 "ctrlr_loss_timeout_sec": 0, 00:24:53.604 "delay_cmd_submit": true, 00:24:53.604 "dhchap_dhgroups": [ 00:24:53.604 "null", 00:24:53.604 "ffdhe2048", 00:24:53.604 "ffdhe3072", 00:24:53.604 "ffdhe4096", 00:24:53.604 "ffdhe6144", 00:24:53.604 "ffdhe8192" 00:24:53.604 ], 00:24:53.604 "dhchap_digests": [ 00:24:53.604 "sha256", 00:24:53.604 "sha384", 00:24:53.604 "sha512" 00:24:53.604 ], 00:24:53.604 "disable_auto_failback": false, 00:24:53.604 "fast_io_fail_timeout_sec": 0, 00:24:53.604 "generate_uuids": false, 00:24:53.604 "high_priority_weight": 0, 00:24:53.604 "io_path_stat": false, 00:24:53.604 "io_queue_requests": 512, 00:24:53.604 "keep_alive_timeout_ms": 10000, 00:24:53.604 "low_priority_weight": 0, 00:24:53.604 "medium_priority_weight": 0, 00:24:53.604 "nvme_adminq_poll_period_us": 10000, 00:24:53.604 "nvme_error_stat": false, 00:24:53.604 "nvme_ioq_poll_period_us": 0, 00:24:53.604 "rdma_cm_event_timeout_ms": 0, 00:24:53.604 "rdma_max_cq_size": 0, 00:24:53.604 "rdma_srq_size": 0, 00:24:53.604 "reconnect_delay_sec": 0, 00:24:53.604 "timeout_admin_us": 0, 00:24:53.604 "timeout_us": 0, 00:24:53.604 "transport_ack_timeout": 0, 00:24:53.604 "transport_retry_count": 4, 00:24:53.604 "transport_tos": 0 00:24:53.604 } 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "method": "bdev_nvme_attach_controller", 00:24:53.604 "params": { 00:24:53.604 "adrfam": "IPv4", 00:24:53.604 "ctrlr_loss_timeout_sec": 0, 00:24:53.604 "ddgst": false, 00:24:53.604 "fast_io_fail_timeout_sec": 0, 00:24:53.604 "hdgst": false, 00:24:53.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:53.604 "name": "nvme0", 00:24:53.604 "prchk_guard": false, 00:24:53.604 "prchk_reftag": false, 00:24:53.604 "psk": "key0", 00:24:53.604 "reconnect_delay_sec": 0, 00:24:53.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:53.604 "traddr": "127.0.0.1", 00:24:53.604 "trsvcid": "4420", 00:24:53.604 "trtype": "TCP" 00:24:53.604 } 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "method": "bdev_nvme_set_hotplug", 00:24:53.604 "params": { 00:24:53.604 "enable": false, 00:24:53.604 "period_us": 100000 00:24:53.604 } 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "method": "bdev_wait_for_examine" 00:24:53.604 } 00:24:53.604 ] 00:24:53.604 }, 00:24:53.604 { 00:24:53.604 "subsystem": "nbd", 00:24:53.604 "config": [] 00:24:53.604 } 
00:24:53.604 ] 00:24:53.604 }' 00:24:53.604 22:19:40 keyring_file -- keyring/file.sh@114 -- # killprocess 99953 00:24:53.604 22:19:40 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99953 ']' 00:24:53.604 22:19:40 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99953 00:24:53.604 22:19:40 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:53.604 22:19:40 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:53.604 22:19:40 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99953 00:24:53.604 killing process with pid 99953 00:24:53.604 Received shutdown signal, test time was about 1.000000 seconds 00:24:53.604 00:24:53.604 Latency(us) 00:24:53.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.604 =================================================================================================================== 00:24:53.604 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:53.604 22:19:40 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:53.604 22:19:40 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:53.604 22:19:40 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99953' 00:24:53.604 22:19:40 keyring_file -- common/autotest_common.sh@967 -- # kill 99953 00:24:53.604 22:19:40 keyring_file -- common/autotest_common.sh@972 -- # wait 99953 00:24:53.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:53.862 22:19:40 keyring_file -- keyring/file.sh@117 -- # bperfpid=100431 00:24:53.862 22:19:40 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100431 /var/tmp/bperf.sock 00:24:53.862 22:19:40 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100431 ']' 00:24:53.862 22:19:40 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:53.862 22:19:40 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:53.862 22:19:40 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.862 22:19:40 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
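The second bdevperf instance being started here is handed the JSON captured earlier with save_config over a process substitution (-c /dev/fd/63), so the key files and the nvme0 controller are recreated at startup instead of via individual RPCs. A sketch of that launch pattern; capturing the config into a shell variable first is an assumption, the trace simply echoes the literal JSON:

  config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")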
00:24:53.862 22:19:40 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:24:53.862 "subsystems": [ 00:24:53.862 { 00:24:53.862 "subsystem": "keyring", 00:24:53.862 "config": [ 00:24:53.862 { 00:24:53.862 "method": "keyring_file_add_key", 00:24:53.862 "params": { 00:24:53.862 "name": "key0", 00:24:53.862 "path": "/tmp/tmp.pcp0lpkvqq" 00:24:53.862 } 00:24:53.862 }, 00:24:53.862 { 00:24:53.862 "method": "keyring_file_add_key", 00:24:53.862 "params": { 00:24:53.862 "name": "key1", 00:24:53.862 "path": "/tmp/tmp.Efwwx5jSbA" 00:24:53.862 } 00:24:53.862 } 00:24:53.862 ] 00:24:53.862 }, 00:24:53.862 { 00:24:53.862 "subsystem": "iobuf", 00:24:53.862 "config": [ 00:24:53.862 { 00:24:53.862 "method": "iobuf_set_options", 00:24:53.862 "params": { 00:24:53.862 "large_bufsize": 135168, 00:24:53.862 "large_pool_count": 1024, 00:24:53.862 "small_bufsize": 8192, 00:24:53.862 "small_pool_count": 8192 00:24:53.862 } 00:24:53.862 } 00:24:53.862 ] 00:24:53.862 }, 00:24:53.862 { 00:24:53.862 "subsystem": "sock", 00:24:53.862 "config": [ 00:24:53.862 { 00:24:53.862 "method": "sock_set_default_impl", 00:24:53.862 "params": { 00:24:53.862 "impl_name": "posix" 00:24:53.862 } 00:24:53.862 }, 00:24:53.862 { 00:24:53.862 "method": "sock_impl_set_options", 00:24:53.862 "params": { 00:24:53.862 "enable_ktls": false, 00:24:53.862 "enable_placement_id": 0, 00:24:53.862 "enable_quickack": false, 00:24:53.862 "enable_recv_pipe": true, 00:24:53.862 "enable_zerocopy_send_client": false, 00:24:53.862 "enable_zerocopy_send_server": true, 00:24:53.862 "impl_name": "ssl", 00:24:53.862 "recv_buf_size": 4096, 00:24:53.862 "send_buf_size": 4096, 00:24:53.862 "tls_version": 0, 00:24:53.862 "zerocopy_threshold": 0 00:24:53.862 } 00:24:53.862 }, 00:24:53.862 { 00:24:53.862 "method": "sock_impl_set_options", 00:24:53.862 "params": { 00:24:53.862 "enable_ktls": false, 00:24:53.862 "enable_placement_id": 0, 00:24:53.862 "enable_quickack": false, 00:24:53.862 "enable_recv_pipe": true, 00:24:53.862 "enable_zerocopy_send_client": false, 00:24:53.862 "enable_zerocopy_send_server": true, 00:24:53.862 "impl_name": "posix", 00:24:53.862 "recv_buf_size": 2097152, 00:24:53.862 "send_buf_size": 2097152, 00:24:53.862 "tls_version": 0, 00:24:53.862 "zerocopy_threshold": 0 00:24:53.862 } 00:24:53.862 } 00:24:53.862 ] 00:24:53.862 }, 00:24:53.862 { 00:24:53.862 "subsystem": "vmd", 00:24:53.862 "config": [] 00:24:53.862 }, 00:24:53.862 { 00:24:53.862 "subsystem": "accel", 00:24:53.862 "config": [ 00:24:53.862 { 00:24:53.862 "method": "accel_set_options", 00:24:53.862 "params": { 00:24:53.862 "buf_count": 2048, 00:24:53.862 "large_cache_size": 16, 00:24:53.862 "sequence_count": 2048, 00:24:53.862 "small_cache_size": 128, 00:24:53.862 "task_count": 2048 00:24:53.862 } 00:24:53.862 } 00:24:53.862 ] 00:24:53.862 }, 00:24:53.862 { 00:24:53.862 "subsystem": "bdev", 00:24:53.862 "config": [ 00:24:53.862 { 00:24:53.862 "method": "bdev_set_options", 00:24:53.862 "params": { 00:24:53.862 "bdev_auto_examine": true, 00:24:53.862 "bdev_io_cache_size": 256, 00:24:53.862 "bdev_io_pool_size": 65535, 00:24:53.862 "iobuf_large_cache_size": 16, 00:24:53.862 "iobuf_small_cache_size": 128 00:24:53.862 } 00:24:53.862 }, 00:24:53.862 { 00:24:53.862 "method": "bdev_raid_set_options", 00:24:53.862 "params": { 00:24:53.862 "process_window_size_kb": 1024 00:24:53.862 } 00:24:53.862 }, 00:24:53.862 { 00:24:53.862 "method": "bdev_iscsi_set_options", 00:24:53.862 "params": { 00:24:53.862 "timeout_sec": 30 00:24:53.862 } 00:24:53.862 }, 00:24:53.862 { 00:24:53.862 "method": 
"bdev_nvme_set_options", 00:24:53.862 "params": { 00:24:53.862 "action_on_timeout": "none", 00:24:53.862 "allow_accel_sequence": false, 00:24:53.862 "arbitration_burst": 0, 00:24:53.862 "bdev_retry_count": 3, 00:24:53.862 "ctrlr_loss_timeout_sec": 0, 00:24:53.862 "delay_cmd_submit": true, 00:24:53.862 "dhchap_dhgroups": [ 00:24:53.862 "null", 00:24:53.862 "ffdhe2048", 00:24:53.862 "ffdhe3072", 00:24:53.862 "ffdhe4096", 00:24:53.862 "ffdhe6144", 00:24:53.862 "ffdhe8192" 00:24:53.862 ], 00:24:53.862 "dhchap_digests": [ 00:24:53.862 "sha256", 00:24:53.862 "sha384", 00:24:53.862 "sha512" 00:24:53.862 ], 00:24:53.862 "disable_auto_failback": false, 00:24:53.862 "fast_io_fail_timeout_sec": 0, 00:24:53.862 "generate_uuids": false, 00:24:53.862 "high_priority_weight": 0, 00:24:53.862 "io_path_stat": false, 00:24:53.862 "io_queue_requests": 512, 00:24:53.862 "keep_alive_timeout_ms": 10000, 00:24:53.862 "low_priority_weight": 0, 00:24:53.862 "medium_priority_weight": 0, 00:24:53.862 "nvme_adminq_poll_period_us": 10000, 00:24:53.862 "nvme_error_stat": false, 00:24:53.862 "nvme_ioq_poll_period_us": 0, 00:24:53.862 "rdma_cm_event_timeout_ms": 0, 00:24:53.862 "rdma_max_cq_size": 0, 00:24:53.862 "rdma_srq_size": 0, 00:24:53.862 "reconnect_delay_sec": 0, 00:24:53.862 "timeout_admin_us": 0, 00:24:53.862 "timeout_us": 0, 00:24:53.862 "transport_ack_timeout": 0, 00:24:53.862 "transport_retry_count": 4, 00:24:53.862 "transport_tos": 0 00:24:53.863 } 00:24:53.863 }, 00:24:53.863 { 00:24:53.863 "method": "bdev_nvme_attach_controller", 00:24:53.863 "params": { 00:24:53.863 "adrfam": "IPv4", 00:24:53.863 "ctrlr_loss_timeout_sec": 0, 00:24:53.863 "ddgst": false, 00:24:53.863 "fast_io_fail_timeout_sec": 0, 00:24:53.863 "hdgst": false, 00:24:53.863 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:53.863 "name": "nvme0", 00:24:53.863 "prchk_guard": false, 00:24:53.863 "prchk_reftag": false, 00:24:53.863 "psk": "key0", 00:24:53.863 "reconnect_delay_sec": 0, 00:24:53.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:53.863 "traddr": "127.0.0.1", 00:24:53.863 "trsvcid": "4420", 00:24:53.863 "trtype": "TCP" 00:24:53.863 } 00:24:53.863 }, 00:24:53.863 { 00:24:53.863 "method": "bdev_nvme_set_hotplug", 00:24:53.863 "params": { 00:24:53.863 "enable": false, 00:24:53.863 "period_us": 100000 00:24:53.863 } 00:24:53.863 }, 00:24:53.863 { 00:24:53.863 "method": "bdev_wait_for_examine" 00:24:53.863 } 00:24:53.863 ] 00:24:53.863 }, 00:24:53.863 { 00:24:53.863 "subsystem": "nbd", 00:24:53.863 "config": [] 00:24:53.863 } 00:24:53.863 ] 00:24:53.863 }' 00:24:53.863 22:19:40 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.863 22:19:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:53.863 [2024-07-15 22:19:40.658919] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:24:53.863 [2024-07-15 22:19:40.659052] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100431 ] 00:24:53.863 [2024-07-15 22:19:40.793695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.119 [2024-07-15 22:19:40.852956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.119 [2024-07-15 22:19:41.004556] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:55.050 22:19:41 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.050 22:19:41 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:55.050 22:19:41 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:55.050 22:19:41 keyring_file -- keyring/file.sh@120 -- # jq length 00:24:55.050 22:19:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.050 22:19:41 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:55.050 22:19:41 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:24:55.050 22:19:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:55.050 22:19:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:55.050 22:19:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:55.050 22:19:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.050 22:19:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:55.306 22:19:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:55.306 22:19:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:24:55.306 22:19:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:55.306 22:19:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:55.306 22:19:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:55.306 22:19:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.306 22:19:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:55.562 22:19:42 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:55.562 22:19:42 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:55.562 22:19:42 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:55.562 22:19:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:55.819 22:19:42 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:55.819 22:19:42 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:55.819 22:19:42 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.pcp0lpkvqq /tmp/tmp.Efwwx5jSbA 00:24:55.819 22:19:42 keyring_file -- keyring/file.sh@20 -- # killprocess 100431 00:24:55.819 22:19:42 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100431 ']' 00:24:55.819 22:19:42 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100431 00:24:55.819 22:19:42 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:55.819 22:19:42 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:24:55.819 22:19:42 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100431 00:24:55.819 killing process with pid 100431 00:24:55.819 Received shutdown signal, test time was about 1.000000 seconds 00:24:55.819 00:24:55.819 Latency(us) 00:24:55.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.819 =================================================================================================================== 00:24:55.819 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:55.819 22:19:42 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:55.819 22:19:42 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:55.819 22:19:42 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100431' 00:24:55.819 22:19:42 keyring_file -- common/autotest_common.sh@967 -- # kill 100431 00:24:55.819 22:19:42 keyring_file -- common/autotest_common.sh@972 -- # wait 100431 00:24:56.076 22:19:42 keyring_file -- keyring/file.sh@21 -- # killprocess 99918 00:24:56.076 22:19:42 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99918 ']' 00:24:56.076 22:19:42 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99918 00:24:56.076 22:19:42 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:56.076 22:19:42 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:56.076 22:19:42 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99918 00:24:56.076 killing process with pid 99918 00:24:56.076 22:19:42 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:56.076 22:19:42 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:56.076 22:19:42 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99918' 00:24:56.076 22:19:42 keyring_file -- common/autotest_common.sh@967 -- # kill 99918 00:24:56.076 [2024-07-15 22:19:42.889805] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:56.076 22:19:42 keyring_file -- common/autotest_common.sh@972 -- # wait 99918 00:24:56.333 00:24:56.333 real 0m16.566s 00:24:56.333 user 0m42.175s 00:24:56.333 sys 0m2.994s 00:24:56.333 22:19:43 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:56.333 ************************************ 00:24:56.333 END TEST keyring_file 00:24:56.333 ************************************ 00:24:56.333 22:19:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:56.333 22:19:43 -- common/autotest_common.sh@1142 -- # return 0 00:24:56.333 22:19:43 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:24:56.333 22:19:43 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:56.333 22:19:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:56.333 22:19:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:56.333 22:19:43 -- common/autotest_common.sh@10 -- # set +x 00:24:56.333 ************************************ 00:24:56.333 START TEST keyring_linux 00:24:56.333 ************************************ 00:24:56.333 22:19:43 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:56.333 * Looking for test storage... 
00:24:56.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:56.333 22:19:43 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:56.333 22:19:43 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:56.333 22:19:43 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:56.333 22:19:43 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.333 22:19:43 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ff65e169-209e-4b79-b82d-da213c413a29 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=ff65e169-209e-4b79-b82d-da213c413a29 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:56.334 22:19:43 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.334 22:19:43 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.334 22:19:43 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.334 22:19:43 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.334 22:19:43 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.334 22:19:43 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.334 22:19:43 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:56.334 22:19:43 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.334 22:19:43 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:56.334 22:19:43 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:56.334 22:19:43 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:56.334 22:19:43 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:56.334 22:19:43 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:56.334 22:19:43 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:56.334 22:19:43 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:56.334 22:19:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:56.334 22:19:43 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:56.334 22:19:43 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:56.334 22:19:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:56.334 22:19:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:56.334 22:19:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:56.334 22:19:43 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:56.590 22:19:43 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:56.590 /tmp/:spdk-test:key0 00:24:56.590 22:19:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:56.590 22:19:43 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:56.590 22:19:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:56.590 22:19:43 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:56.590 22:19:43 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:56.590 22:19:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:56.591 22:19:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:56.591 22:19:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:56.591 22:19:43 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:56.591 22:19:43 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:56.591 22:19:43 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:56.591 22:19:43 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:56.591 22:19:43 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:56.591 22:19:43 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:56.591 22:19:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:56.591 /tmp/:spdk-test:key1 00:24:56.591 22:19:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:56.591 22:19:43 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100580 00:24:56.591 22:19:43 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:56.591 22:19:43 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100580 00:24:56.591 22:19:43 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100580 ']' 00:24:56.591 22:19:43 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.591 22:19:43 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.591 22:19:43 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.591 22:19:43 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.591 22:19:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:56.591 [2024-07-15 22:19:43.434522] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:24:56.591 [2024-07-15 22:19:43.434622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100580 ] 00:24:56.848 [2024-07-15 22:19:43.574132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.848 [2024-07-15 22:19:43.642831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.780 22:19:44 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.780 22:19:44 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:57.780 22:19:44 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:57.780 22:19:44 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.780 22:19:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:57.780 [2024-07-15 22:19:44.399828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.780 null0 00:24:57.780 [2024-07-15 22:19:44.431784] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:57.780 [2024-07-15 22:19:44.432017] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:57.780 22:19:44 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.780 22:19:44 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:57.780 327255357 00:24:57.780 22:19:44 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:57.780 1059646526 00:24:57.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:57.780 22:19:44 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100615 00:24:57.780 22:19:44 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:57.780 22:19:44 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100615 /var/tmp/bperf.sock 00:24:57.780 22:19:44 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100615 ']' 00:24:57.780 22:19:44 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:57.780 22:19:44 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:57.780 22:19:44 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:57.780 22:19:44 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:57.780 22:19:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:57.780 [2024-07-15 22:19:44.521772] Starting SPDK v24.09-pre git sha1 406b3b1b5 / DPDK 24.03.0 initialization... 
00:24:57.780 [2024-07-15 22:19:44.521910] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100615 ] 00:24:57.780 [2024-07-15 22:19:44.660877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.780 [2024-07-15 22:19:44.725661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.741 22:19:45 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.741 22:19:45 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:58.741 22:19:45 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:58.741 22:19:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:58.998 22:19:45 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:58.998 22:19:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:59.563 22:19:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:59.563 22:19:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:59.563 [2024-07-15 22:19:46.496478] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:59.821 nvme0n1 00:24:59.821 22:19:46 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:59.821 22:19:46 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:59.821 22:19:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:59.821 22:19:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:59.821 22:19:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.821 22:19:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:00.078 22:19:46 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:00.078 22:19:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:00.078 22:19:46 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:00.079 22:19:46 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:00.079 22:19:46 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:00.079 22:19:46 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.079 22:19:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.643 22:19:47 keyring_linux -- keyring/linux.sh@25 -- # sn=327255357 00:25:00.643 22:19:47 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:00.643 22:19:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:00.643 22:19:47 keyring_linux -- keyring/linux.sh@26 -- # [[ 327255357 == \3\2\7\2\5\5\3\5\7 ]] 00:25:00.643 22:19:47 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 327255357 00:25:00.643 22:19:47 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:00.643 22:19:47 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:00.643 Running I/O for 1 seconds... 00:25:01.573 00:25:01.573 Latency(us) 00:25:01.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.573 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:01.573 nvme0n1 : 1.01 11070.41 43.24 0.00 0.00 11513.11 7923.90 18945.86 00:25:01.573 =================================================================================================================== 00:25:01.573 Total : 11070.41 43.24 0.00 0.00 11513.11 7923.90 18945.86 00:25:01.573 0 00:25:01.831 22:19:48 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:01.831 22:19:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:02.089 22:19:48 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:02.089 22:19:48 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:02.089 22:19:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:02.089 22:19:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:02.089 22:19:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.089 22:19:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:02.346 22:19:49 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:02.346 22:19:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:02.346 22:19:49 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:02.346 22:19:49 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:02.346 22:19:49 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:25:02.346 22:19:49 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:02.346 22:19:49 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:02.346 22:19:49 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:02.346 22:19:49 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:02.346 22:19:49 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:02.346 22:19:49 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:02.346 22:19:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:25:02.603 [2024-07-15 22:19:49.337433] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:02.603 [2024-07-15 22:19:49.338048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd86ea0 (107): Transport endpoint is not connected 00:25:02.603 [2024-07-15 22:19:49.339036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd86ea0 (9): Bad file descriptor 00:25:02.603 [2024-07-15 22:19:49.340033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:02.603 [2024-07-15 22:19:49.340056] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:02.603 [2024-07-15 22:19:49.340066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:02.603 2024/07/15 22:19:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:25:02.603 request: 00:25:02.603 { 00:25:02.603 "method": "bdev_nvme_attach_controller", 00:25:02.603 "params": { 00:25:02.603 "name": "nvme0", 00:25:02.603 "trtype": "tcp", 00:25:02.603 "traddr": "127.0.0.1", 00:25:02.603 "adrfam": "ipv4", 00:25:02.603 "trsvcid": "4420", 00:25:02.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:02.603 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:02.603 "prchk_reftag": false, 00:25:02.603 "prchk_guard": false, 00:25:02.603 "hdgst": false, 00:25:02.603 "ddgst": false, 00:25:02.603 "psk": ":spdk-test:key1" 00:25:02.603 } 00:25:02.603 } 00:25:02.603 Got JSON-RPC error response 00:25:02.603 GoRPCClient: error on JSON-RPC call 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@33 -- # sn=327255357 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 327255357 00:25:02.603 1 links removed 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@33 -- # sn=1059646526 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1059646526 00:25:02.603 1 links removed 00:25:02.603 22:19:49 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100615 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100615 ']' 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100615 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100615 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:02.603 killing process with pid 100615 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100615' 00:25:02.603 Received shutdown signal, test time was about 1.000000 seconds 00:25:02.603 00:25:02.603 Latency(us) 00:25:02.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.603 =================================================================================================================== 00:25:02.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@967 -- # kill 100615 00:25:02.603 22:19:49 keyring_linux -- common/autotest_common.sh@972 -- # wait 100615 00:25:02.860 22:19:49 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100580 00:25:02.860 22:19:49 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100580 ']' 00:25:02.860 22:19:49 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100580 00:25:02.860 22:19:49 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:25:02.860 22:19:49 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:02.861 22:19:49 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100580 00:25:02.861 22:19:49 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:02.861 22:19:49 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:02.861 killing process with pid 100580 00:25:02.861 22:19:49 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100580' 00:25:02.861 22:19:49 keyring_linux -- common/autotest_common.sh@967 -- # kill 100580 00:25:02.861 22:19:49 keyring_linux -- common/autotest_common.sh@972 -- # wait 100580 00:25:03.119 00:25:03.119 real 0m6.652s 00:25:03.119 user 0m13.736s 00:25:03.119 sys 0m1.517s 00:25:03.119 22:19:49 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:03.119 ************************************ 00:25:03.119 END TEST keyring_linux 00:25:03.119 ************************************ 00:25:03.119 22:19:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:03.119 22:19:49 -- common/autotest_common.sh@1142 -- # return 0 00:25:03.119 22:19:49 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:25:03.119 22:19:49 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:25:03.119 22:19:49 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:25:03.119 22:19:49 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:25:03.119 22:19:49 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 
']' 00:25:03.119 22:19:49 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:25:03.119 22:19:49 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:03.119 22:19:49 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:25:03.119 22:19:49 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:25:03.119 22:19:49 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:03.119 22:19:49 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:25:03.119 22:19:49 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:25:03.119 22:19:49 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:25:03.119 22:19:49 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:25:03.119 22:19:49 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:25:03.119 22:19:49 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:25:03.119 22:19:49 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:25:03.119 22:19:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:03.119 22:19:49 -- common/autotest_common.sh@10 -- # set +x 00:25:03.119 22:19:49 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:25:03.119 22:19:49 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:03.119 22:19:49 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:03.119 22:19:49 -- common/autotest_common.sh@10 -- # set +x 00:25:04.491 INFO: APP EXITING 00:25:04.491 INFO: killing all VMs 00:25:04.491 INFO: killing vhost app 00:25:04.491 INFO: EXIT DONE 00:25:04.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:04.748 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:05.088 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:05.366 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:05.366 Cleaning 00:25:05.366 Removing: /var/run/dpdk/spdk0/config 00:25:05.366 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:05.624 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:05.624 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:05.624 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:05.624 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:05.624 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:05.624 Removing: /var/run/dpdk/spdk1/config 00:25:05.624 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:05.624 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:05.624 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:05.624 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:05.624 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:05.624 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:05.624 Removing: /var/run/dpdk/spdk2/config 00:25:05.624 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:05.624 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:05.624 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:05.624 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:05.624 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:05.624 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:05.624 Removing: /var/run/dpdk/spdk3/config 00:25:05.624 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:05.624 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:05.624 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:05.624 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:05.624 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:05.624 Removing: /var/run/dpdk/spdk3/hugepage_info 
00:25:05.624 Removing: /var/run/dpdk/spdk4/config 00:25:05.624 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:05.624 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:05.624 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:05.624 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:05.624 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:05.624 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:05.624 Removing: /dev/shm/nvmf_trace.0 00:25:05.624 Removing: /dev/shm/spdk_tgt_trace.pid60716 00:25:05.624 Removing: /var/run/dpdk/spdk0 00:25:05.624 Removing: /var/run/dpdk/spdk1 00:25:05.624 Removing: /var/run/dpdk/spdk2 00:25:05.624 Removing: /var/run/dpdk/spdk3 00:25:05.624 Removing: /var/run/dpdk/spdk4 00:25:05.624 Removing: /var/run/dpdk/spdk_pid100431 00:25:05.624 Removing: /var/run/dpdk/spdk_pid100580 00:25:05.624 Removing: /var/run/dpdk/spdk_pid100615 00:25:05.624 Removing: /var/run/dpdk/spdk_pid60582 00:25:05.624 Removing: /var/run/dpdk/spdk_pid60716 00:25:05.624 Removing: /var/run/dpdk/spdk_pid60977 00:25:05.624 Removing: /var/run/dpdk/spdk_pid61064 00:25:05.624 Removing: /var/run/dpdk/spdk_pid61090 00:25:05.624 Removing: /var/run/dpdk/spdk_pid61200 00:25:05.624 Removing: /var/run/dpdk/spdk_pid61216 00:25:05.624 Removing: /var/run/dpdk/spdk_pid61334 00:25:05.624 Removing: /var/run/dpdk/spdk_pid61614 00:25:05.624 Removing: /var/run/dpdk/spdk_pid61785 00:25:05.624 Removing: /var/run/dpdk/spdk_pid61861 00:25:05.624 Removing: /var/run/dpdk/spdk_pid61953 00:25:05.624 Removing: /var/run/dpdk/spdk_pid62043 00:25:05.624 Removing: /var/run/dpdk/spdk_pid62076 00:25:05.624 Removing: /var/run/dpdk/spdk_pid62111 00:25:05.624 Removing: /var/run/dpdk/spdk_pid62173 00:25:05.624 Removing: /var/run/dpdk/spdk_pid62271 00:25:05.624 Removing: /var/run/dpdk/spdk_pid62889 00:25:05.624 Removing: /var/run/dpdk/spdk_pid62953 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63022 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63050 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63124 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63152 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63225 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63240 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63291 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63322 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63368 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63398 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63545 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63580 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63655 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63724 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63749 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63806 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63839 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63871 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63905 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63942 00:25:05.624 Removing: /var/run/dpdk/spdk_pid63971 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64010 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64040 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64069 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64109 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64138 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64174 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64209 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64238 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64278 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64307 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64342 00:25:05.624 Removing: 
/var/run/dpdk/spdk_pid64379 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64417 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64451 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64481 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64551 00:25:05.624 Removing: /var/run/dpdk/spdk_pid64644 00:25:05.624 Removing: /var/run/dpdk/spdk_pid65037 00:25:05.882 Removing: /var/run/dpdk/spdk_pid68340 00:25:05.882 Removing: /var/run/dpdk/spdk_pid68686 00:25:05.882 Removing: /var/run/dpdk/spdk_pid71012 00:25:05.882 Removing: /var/run/dpdk/spdk_pid71382 00:25:05.882 Removing: /var/run/dpdk/spdk_pid71652 00:25:05.882 Removing: /var/run/dpdk/spdk_pid71703 00:25:05.882 Removing: /var/run/dpdk/spdk_pid72329 00:25:05.882 Removing: /var/run/dpdk/spdk_pid72754 00:25:05.882 Removing: /var/run/dpdk/spdk_pid72800 00:25:05.882 Removing: /var/run/dpdk/spdk_pid73156 00:25:05.882 Removing: /var/run/dpdk/spdk_pid73676 00:25:05.882 Removing: /var/run/dpdk/spdk_pid74120 00:25:05.882 Removing: /var/run/dpdk/spdk_pid75023 00:25:05.882 Removing: /var/run/dpdk/spdk_pid76040 00:25:05.882 Removing: /var/run/dpdk/spdk_pid76153 00:25:05.882 Removing: /var/run/dpdk/spdk_pid76221 00:25:05.882 Removing: /var/run/dpdk/spdk_pid77685 00:25:05.882 Removing: /var/run/dpdk/spdk_pid77898 00:25:05.882 Removing: /var/run/dpdk/spdk_pid83348 00:25:05.882 Removing: /var/run/dpdk/spdk_pid83795 00:25:05.882 Removing: /var/run/dpdk/spdk_pid83902 00:25:05.882 Removing: /var/run/dpdk/spdk_pid84050 00:25:05.882 Removing: /var/run/dpdk/spdk_pid84078 00:25:05.882 Removing: /var/run/dpdk/spdk_pid84110 00:25:05.882 Removing: /var/run/dpdk/spdk_pid84142 00:25:05.882 Removing: /var/run/dpdk/spdk_pid84287 00:25:05.882 Removing: /var/run/dpdk/spdk_pid84434 00:25:05.882 Removing: /var/run/dpdk/spdk_pid84672 00:25:05.882 Removing: /var/run/dpdk/spdk_pid84782 00:25:05.882 Removing: /var/run/dpdk/spdk_pid85041 00:25:05.882 Removing: /var/run/dpdk/spdk_pid85138 00:25:05.882 Removing: /var/run/dpdk/spdk_pid85255 00:25:05.882 Removing: /var/run/dpdk/spdk_pid85593 00:25:05.882 Removing: /var/run/dpdk/spdk_pid85994 00:25:05.882 Removing: /var/run/dpdk/spdk_pid86282 00:25:05.882 Removing: /var/run/dpdk/spdk_pid86746 00:25:05.882 Removing: /var/run/dpdk/spdk_pid86748 00:25:05.882 Removing: /var/run/dpdk/spdk_pid87071 00:25:05.882 Removing: /var/run/dpdk/spdk_pid87089 00:25:05.883 Removing: /var/run/dpdk/spdk_pid87110 00:25:05.883 Removing: /var/run/dpdk/spdk_pid87135 00:25:05.883 Removing: /var/run/dpdk/spdk_pid87140 00:25:05.883 Removing: /var/run/dpdk/spdk_pid87493 00:25:05.883 Removing: /var/run/dpdk/spdk_pid87540 00:25:05.883 Removing: /var/run/dpdk/spdk_pid87877 00:25:05.883 Removing: /var/run/dpdk/spdk_pid88114 00:25:05.883 Removing: /var/run/dpdk/spdk_pid88582 00:25:05.883 Removing: /var/run/dpdk/spdk_pid89166 00:25:05.883 Removing: /var/run/dpdk/spdk_pid90514 00:25:05.883 Removing: /var/run/dpdk/spdk_pid91107 00:25:05.883 Removing: /var/run/dpdk/spdk_pid91109 00:25:05.883 Removing: /var/run/dpdk/spdk_pid93055 00:25:05.883 Removing: /var/run/dpdk/spdk_pid93145 00:25:05.883 Removing: /var/run/dpdk/spdk_pid93236 00:25:05.883 Removing: /var/run/dpdk/spdk_pid93307 00:25:05.883 Removing: /var/run/dpdk/spdk_pid93452 00:25:05.883 Removing: /var/run/dpdk/spdk_pid93529 00:25:05.883 Removing: /var/run/dpdk/spdk_pid93600 00:25:05.883 Removing: /var/run/dpdk/spdk_pid93695 00:25:05.883 Removing: /var/run/dpdk/spdk_pid94008 00:25:05.883 Removing: /var/run/dpdk/spdk_pid94694 00:25:05.883 Removing: /var/run/dpdk/spdk_pid96038 00:25:05.883 Removing: /var/run/dpdk/spdk_pid96242 
00:25:05.883 Removing: /var/run/dpdk/spdk_pid96530 00:25:05.883 Removing: /var/run/dpdk/spdk_pid96824 00:25:05.883 Removing: /var/run/dpdk/spdk_pid97358 00:25:05.883 Removing: /var/run/dpdk/spdk_pid97363 00:25:05.883 Removing: /var/run/dpdk/spdk_pid97709 00:25:05.883 Removing: /var/run/dpdk/spdk_pid97863 00:25:05.883 Removing: /var/run/dpdk/spdk_pid98020 00:25:05.883 Removing: /var/run/dpdk/spdk_pid98116 00:25:05.883 Removing: /var/run/dpdk/spdk_pid98357 00:25:05.883 Removing: /var/run/dpdk/spdk_pid98466 00:25:05.883 Removing: /var/run/dpdk/spdk_pid99115 00:25:05.883 Removing: /var/run/dpdk/spdk_pid99145 00:25:05.883 Removing: /var/run/dpdk/spdk_pid99180 00:25:05.883 Removing: /var/run/dpdk/spdk_pid99428 00:25:05.883 Removing: /var/run/dpdk/spdk_pid99464 00:25:05.883 Removing: /var/run/dpdk/spdk_pid99494 00:25:05.883 Removing: /var/run/dpdk/spdk_pid99918 00:25:05.883 Removing: /var/run/dpdk/spdk_pid99953 00:25:05.883 Clean 00:25:05.883 22:19:52 -- common/autotest_common.sh@1451 -- # return 0 00:25:05.883 22:19:52 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:25:05.883 22:19:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:05.883 22:19:52 -- common/autotest_common.sh@10 -- # set +x 00:25:06.141 22:19:52 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:25:06.141 22:19:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:06.141 22:19:52 -- common/autotest_common.sh@10 -- # set +x 00:25:06.141 22:19:52 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:06.141 22:19:52 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:06.141 22:19:52 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:06.141 22:19:52 -- spdk/autotest.sh@391 -- # hash lcov 00:25:06.141 22:19:52 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:06.141 22:19:52 -- spdk/autotest.sh@393 -- # hostname 00:25:06.141 22:19:52 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:06.141 geninfo: WARNING: invalid characters removed from testname! 
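The lcov passes that follow (autotest.sh@394-399) merge the capture generated just above (autotest.sh@393) with the pre-test baseline and strip out external sources. Condensed, and with the --rc branch/function options and long output paths omitted for brevity, the sequence is roughly:
# Capture coverage from the repo, merge with the baseline, then remove DPDK
# and system sources from the totals.
lcov -q -c -d /home/vagrant/spdk_repo/spdk --no-external -t "$(hostname)" -o cov_test.info
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
lcov -q -r cov_total.info '/usr/*' -o cov_total.info
lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
# (the run above also drops */app/spdk_lspci/* and */app/spdk_top/* the same way)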
00:25:38.215 22:20:20 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:38.215 22:20:24 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:40.749 22:20:27 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:43.275 22:20:30 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:46.552 22:20:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:49.079 22:20:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:52.384 22:20:38 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:52.384 22:20:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:52.384 22:20:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:52.384 22:20:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.384 22:20:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.384 22:20:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.384 22:20:38 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.384 22:20:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.384 22:20:38 -- paths/export.sh@5 -- $ export PATH 00:25:52.384 22:20:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.384 22:20:38 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:52.384 22:20:38 -- common/autobuild_common.sh@444 -- $ date +%s 00:25:52.384 22:20:38 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721082038.XXXXXX 00:25:52.384 22:20:38 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721082038.WPnfzb 00:25:52.384 22:20:38 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:25:52.384 22:20:38 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:25:52.384 22:20:38 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:25:52.384 22:20:38 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:52.384 22:20:38 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:52.384 22:20:38 -- common/autobuild_common.sh@460 -- $ get_config_params 00:25:52.384 22:20:38 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:25:52.384 22:20:38 -- common/autotest_common.sh@10 -- $ set +x 00:25:52.384 22:20:38 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:25:52.384 22:20:38 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:25:52.384 22:20:38 -- pm/common@17 -- $ local monitor 00:25:52.384 22:20:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:52.384 22:20:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:52.384 22:20:38 -- pm/common@25 -- $ sleep 1 00:25:52.384 22:20:38 -- pm/common@21 -- $ date +%s 00:25:52.384 22:20:38 -- pm/common@21 -- $ date +%s 00:25:52.384 22:20:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721082038 00:25:52.385 22:20:38 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721082038 00:25:52.385 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721082038_collect-vmstat.pm.log 00:25:52.385 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721082038_collect-cpu-load.pm.log 00:25:52.950 22:20:39 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:25:52.950 22:20:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:25:52.950 22:20:39 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:25:52.950 22:20:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:52.950 22:20:39 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:52.950 22:20:39 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:52.950 22:20:39 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:52.950 22:20:39 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:52.950 22:20:39 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:52.950 22:20:39 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:52.950 22:20:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:52.950 22:20:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:52.950 22:20:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:52.950 22:20:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:52.950 22:20:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:53.208 22:20:39 -- pm/common@44 -- $ pid=102312 00:25:53.208 22:20:39 -- pm/common@50 -- $ kill -TERM 102312 00:25:53.208 22:20:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:53.208 22:20:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:53.208 22:20:39 -- pm/common@44 -- $ pid=102314 00:25:53.208 22:20:39 -- pm/common@50 -- $ kill -TERM 102314 00:25:53.208 + [[ -n 5176 ]] 00:25:53.208 + sudo kill 5176 00:25:53.217 [Pipeline] } 00:25:53.236 [Pipeline] // timeout 00:25:53.242 [Pipeline] } 00:25:53.256 [Pipeline] // stage 00:25:53.261 [Pipeline] } 00:25:53.273 [Pipeline] // catchError 00:25:53.280 [Pipeline] stage 00:25:53.281 [Pipeline] { (Stop VM) 00:25:53.291 [Pipeline] sh 00:25:53.564 + vagrant halt 00:25:57.770 ==> default: Halting domain... 00:26:03.039 [Pipeline] sh 00:26:03.353 + vagrant destroy -f 00:26:07.537 ==> default: Removing domain... 
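For reference, the stop_monitor_resources trap that fired just before the VM teardown (pm/common@29-50 above) follows a simple pid-file pattern; reading the pid out of the file is an assumption here, since the trace only shows the existence check and the final kill -TERM:
# Rough sketch of the monitor shutdown seen above.
for f in /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid \
         /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid; do
  [[ -e $f ]] && kill -TERM "$(cat "$f")"
done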
00:26:07.547 [Pipeline] sh 00:26:07.825 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:26:07.835 [Pipeline] } 00:26:07.857 [Pipeline] // stage 00:26:07.862 [Pipeline] } 00:26:07.883 [Pipeline] // dir 00:26:07.889 [Pipeline] } 00:26:07.906 [Pipeline] // wrap 00:26:07.912 [Pipeline] } 00:26:07.927 [Pipeline] // catchError 00:26:07.934 [Pipeline] stage 00:26:07.936 [Pipeline] { (Epilogue) 00:26:07.947 [Pipeline] sh 00:26:08.225 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:14.791 [Pipeline] catchError 00:26:14.793 [Pipeline] { 00:26:14.808 [Pipeline] sh 00:26:15.087 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:15.345 Artifacts sizes are good 00:26:15.353 [Pipeline] } 00:26:15.368 [Pipeline] // catchError 00:26:15.377 [Pipeline] archiveArtifacts 00:26:15.382 Archiving artifacts 00:26:15.592 [Pipeline] cleanWs 00:26:15.601 [WS-CLEANUP] Deleting project workspace... 00:26:15.601 [WS-CLEANUP] Deferred wipeout is used... 00:26:15.605 [WS-CLEANUP] done 00:26:15.607 [Pipeline] } 00:26:15.619 [Pipeline] // stage 00:26:15.624 [Pipeline] } 00:26:15.636 [Pipeline] // node 00:26:15.641 [Pipeline] End of Pipeline 00:26:15.758 Finished: SUCCESS
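As a closing reference for the keyring_linux portion of this run: the kernel-keyring flow it exercises can be reproduced by hand roughly as follows. The PSK string, key name, RPC socket and attach parameters are taken verbatim from the trace above; the sketch assumes the spdk_tgt target started earlier in the log is still listening on 127.0.0.1:4420 and that bdevperf was launched with -z --wait-for-rpc as shown.
# Put the interchange-format PSK into the session keyring and note its serial.
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"
# Enable the Linux keyring backend and attach the controller by key name.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
  -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
# Detach and drop the key when done.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
keyctl unlink "$sn"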